| | fastText | fairseq |
|---|---|---|
| Mentions | 8 | 89 |
| Stars | 25,505 | 29,301 |
| Growth | - | 0.7% |
| Activity | 6.0 | 6.0 |
| Last commit | about 2 months ago | about 22 hours ago |
| Language | HTML | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
fastText
- FastText Repo Archived
- Pixelfed and Naive Bayes: The Grandfather of Spam Filters Still Making Waves
- fastText [1] is trained with cross-entropy, meaning that model scores can be used more effectively as a 'confidence' - e.g. for spam, if you want to say something like "if prediction score > X, then filter", Naive Bayes is not ideal, because the 'naive' independence assumption makes the scores very un-calibrated (it tends to give extremely high or low confidence scores for most things).
disclaimer: I haven't really thought about NLP for about 3 years so there may be something better than this now
[1] https://github.com/facebookresearch/fastText
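To make the "prediction score > X" idea concrete, here is a minimal sketch using the fasttext Python package; the training file name, labels and 0.9 cutoff are illustrative assumptions, not anything stated in the comment above.

```python
import fasttext

# Supervised fastText expects lines like "__label__spam <text>" /
# "__label__ham <text>"; the file name here is hypothetical.
model = fasttext.train_supervised(input="messages.train")

# predict() returns labels plus softmax probabilities, so the score can be
# used directly as a confidence for thresholding.
labels, probs = model.predict("click here to claim your free prize")
if labels[0] == "__label__spam" and probs[0] > 0.9:  # illustrative cutoff
    print("filtered as spam")
```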
- How worried are you about AI taking over music?
fasttext 50
- FLiP Stack Weekly for 06-Jan-2023
- Fasttext: Library for efficient text classification and representation learning
- Reverse Language Reconstructing by Consensus [D] [P]
https://github.com/facebookresearch/fastText the readme may have what I need built in. But not sure. I hate ML documentation. I would love to see data input to data output examples because people expect us to understand their line of thought, and it just doesn't work out that way. This looks like what I need, but I've completely misinterpreted ML documentation many times. Ha
- Virtual Sommelier, text classifier in the browser
To use a model trained with FastText from the browser, it is necessary to load it via WebAssembly. However, you don't need any WebAssembly knowledge, as you can use the fasttext.js file, which has all the glue code.
- Synonyms.vim: feedback needed.
Having the backend code in the plugin repo, and in Python, held me off. I wrote it to split the vimscript/Python from the command that finds the info, as that allows using powerful tools like fasttext rather than a dictionary.
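As a rough sketch of the kind of lookup such a backend could delegate to fasttext (the model path is an assumption; any pretrained fastText .bin would do):

```python
import fasttext

# Load a pretrained fastText model (hypothetical local path to e.g. the
# official cc.en.300.bin word-vector binary).
model = fasttext.load_model("cc.en.300.bin")

# Nearest neighbours in embedding space serve as approximate "synonyms",
# which is what a thesaurus-style plugin would surface.
for score, word in model.get_nearest_neighbors("happy", k=10):
    print(f"{word}\t{score:.3f}")
```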
fairseq
- Sequence-to-Sequence Toolkit Written in Python
- Unsupervised (Semi-Supervised) ASR/STT training recipes
- Nvidia's 900 tons of GPU muscle bulks up server market, slims down wallets
> Is there really no way to partition the workload to run with 16gb memory per card?
It really depends and this can get really complicated really fast. I'll give a tldr and then a longer explanation.
TLDR:
Yes, you can easily split networks up. If your main bottleneck is batch size (i.e. training), then there aren't huge differences in spreading across multiple GPUs, assuming you have good interconnects (GPUDirect is supported). If you're running inference and the model fits on the card, you're probably fine too, unless you need to do things like fancy inference batching (i.e. you have LOTS of users).
Longer version:
You can always split things up. If we think about networks, we recognize some nice algebraic properties about how they operate. Non-residual networks are compositional, meaning each layer can be treated as a sub-network (every residual block can be treated this way too). Additionally, we may have associative and distributive properties depending on the architecture (some even have commutative!). So we can use these same rules to break apart networks in many different ways. There are often performance hits for doing this, though, as it practically requires touching the disk more often, but in some rarer cases (at least in my experience, let me know if you know more) it can help.
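As a tiny illustration of the compositional point (my own sketch, not from the comment): a plain feed-forward network can be cut at any layer boundary and evaluated as two sub-networks.

```python
import torch
import torch.nn as nn

# A plain (non-residual) network is just a composition of layers.
net = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Cut it anywhere: each slice is itself a valid sub-network.
first_half, second_half = net[:3], net[3:]

x = torch.randn(8, 32)
with torch.no_grad():
    assert torch.allclose(net(x), second_half(first_half(x)))  # f = f2(f1(x))
```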
I mentioned the batching above, and this can get kinda complicated. There are actually performance differences when you batch in groups of data (i.e. across GPUs) compared to batching on a single accelerator. This difference isn't talked about a lot, but it comes down to how much your algorithm depends on batching and what operations are used, such as batch norm. Batch norm statistics are calculated over each GPU's local batch, not the distributed batch (unless you introduce blocking), so your gradients AND your forward passes are computed differently. In DDP your whole network is cloned across cards, so you basically run the forward pass on multiple replicas, all-reduce the gradients during the backward pass, and apply the same update on every card, which keeps the copies in sync.

There is an even bigger difference when you use lazy regularization (not computing the regularization gradients for n minibatches). GANs are notorious for using this, and personally I've seen large benefits from distributed training for them. GANs usually have small batch sizes and aren't getting anywhere near the memory limit of the card anyway (GANs are typically unstable, so large batch sizes can harm them). Also pay attention to this when evaluating papers, along with how much hyper-parameter tuning has been done; this is always tricky when comparing works, especially between academia and big labs, and you can easily be fooled about which is the better model. Evaluating models is way tougher than people give credit for, especially in the modern era of LLMs. I could rant a lot about just this alone.

Basically, in short, we can think of this as an ensembling method, except our models are actually identical (you could do the reductions lazily too, which creates some periodic divergence between your models, but that's not important for conceptual understanding, just worth noting).
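A hedged PyTorch sketch of that DDP/batch-norm point (one process per GPU; launch and networking details are simplified). By default each replica's BatchNorm only sees its local batch; SyncBatchNorm is the "blocking" variant that syncs the statistics across the distributed batch.

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def train_step(rank: int, world_size: int):
    # Classic data parallelism: the whole model is replicated on every GPU.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU()).cuda()
    # Without this conversion, BatchNorm statistics are computed per-GPU batch only.
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
    model = DDP(model, device_ids=[rank])

    x = torch.randn(8, 3, 32, 32, device="cuda")  # local batch of 8 per GPU
    loss = model(x).mean()
    loss.backward()  # DDP all-reduces gradients so every replica stays identical
    dist.destroy_process_group()
```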
There are also techniques to split a single model up, called model sharding and checkpointing. Model sharding is where you split a single model across multiple GPUs. You're taking advantage of the compositional property of networks, meaning that as long as there isn't a residual connection spanning your split location, you can treat one network as a series of smaller networks. This has obvious drawbacks, as you need to feed one into another, so the operations have to be synchronous, but sometimes this isn't too bad. Checkpointing is very similar, but you're doing the same thing on the same GPU. Your hit here is in I/O, which may or may not be too bad with GPUDirect, and it highly depends on your model size (were you splitting because of batch size or because of model size?).
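And a sketch of the sharding/checkpointing pair (assumes two CUDA devices for the sharded half; layer sizes are arbitrary):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# Model sharding: one model, two GPUs, synchronous hand-off of activations.
class ShardedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Linear(4096, 10).to("cuda:1")

    def forward(self, x):
        h = self.part1(x.to("cuda:0"))
        return self.part2(h.to("cuda:1"))  # activations hop to the next card

# Activation checkpointing: the same composition trick on one GPU, trading
# recomputation for memory instead of splitting across devices.
big = nn.Sequential(*[nn.Sequential(nn.Linear(512, 512), nn.ReLU()) for _ in range(8)]).cuda()
x = torch.randn(16, 512, device="cuda", requires_grad=True)
out = checkpoint_sequential(big, 4, x)  # only segment boundaries keep activations
out.sum().backward()
```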
This is all still pretty high level, but if you want to dig into it more, Meta developed a toolkit called fairseq that will do a lot of this for you, and they've optimized it:
https://engineering.fb.com/2021/07/15/open-source/fsdp/
https://github.com/facebookresearch/fairseq
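For reference, a rough sketch of the fully sharded data parallel idea those links describe, using PyTorch's built-in FSDP wrapper rather than fairseq's own trainer (process-group setup is abbreviated; fairseq/fairscale expose their own APIs on top of the same concept):

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def train_step(rank: int, world_size: int):
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096)).cuda()
    # Unlike DDP, FSDP shards parameters, gradients and optimizer state across
    # ranks, so each GPU only materializes a slice of the model at a time.
    model = FSDP(model)

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss = model(torch.randn(8, 4096, device="cuda")).pow(2).mean()
    loss.backward()  # gradients are reduce-scattered to their owning shards
    opt.step()
    dist.destroy_process_group()
```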
TLDR: really depends on your use case, but it is a good question.
- Talk back and forth with AI like you would with a person
How do they do the text-to-voice conversion so fast? https://github.com/facebookresearch/fairseq/tree/main (open source takes sub-minute to do text to voice).
- Voice generation AI (TTS)
It might be worth checking out Meta's TTS though; I haven't gotten the chance to fiddle around with it, but it looks somewhat promising: https://github.com/facebookresearch/fairseq/tree/main/examples/mms
- Translation app with TTS (text-to-speech) for Persian?
They have instructions on how to use it on the command line and a notebook on how to use it as a Python library.
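As an illustration (not the notebook the comment refers to), the MMS TTS checkpoints can also be run through the Hugging Face transformers port; the Persian checkpoint name "facebook/mms-tts-fas" is an assumption worth verifying on the hub.

```python
import torch
import scipy.io.wavfile
from transformers import AutoTokenizer, VitsModel

# Hypothetical Persian MMS-TTS checkpoint (MMS uses ISO 639-3 codes; "fas" = Persian).
ckpt = "facebook/mms-tts-fas"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = VitsModel.from_pretrained(ckpt)

inputs = tokenizer("سلام دنیا", return_tensors="pt")  # "Hello world" in Persian
with torch.no_grad():
    waveform = model(**inputs).waveform  # shape: (batch, samples)

scipy.io.wavfile.write("hello.wav", rate=model.config.sampling_rate,
                       data=waveform.squeeze().numpy())
```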
- Why no work on open source TTS (Text to speech) models
- Meta's Massively Multilingual Speech project supports 1k languages using self supervised learning
Github - https://github.com/facebookresearch/fairseq/tree/main/examples/mms Paper - https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/
- AI — weekly megathread!
Meta released a new open-source model, Massively Multilingual Speech (MMS), that can do both speech-to-text and text-to-speech in 1,107 languages and can also identify 4,000+ spoken languages. Existing speech recognition models only cover approximately 100 of the 7,000+ known spoken languages. [Details | Research Paper | GitHub].
- Meta's MMS: Scaling Speech Technology to 1000+ languages (How to Run colab)
What are some alternatives?
Opus-MT - Open neural machine translation models and web services
gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
synonyms.vim - Finding synonyms of words within vim, saving time going back and forth to a thesaurus.
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
talk - Group video call for the web. No signups. No downloads. [Moved to: https://github.com/vasanthv/tlk]
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
TRIME - [EMNLP 2022] Training Language Models with Memory Augmentation https://arxiv.org/abs/2205.12674
text-to-text-transfer-transformer - Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
Gauss - Stable Diffusion macOS native app
espnet - End-to-End Speech Processing Toolkit
React - The library for web and native user interfaces.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration