| | t5x | t5-pytorch |
|---|---|---|
| Mentions | 7 | 1 |
| Stars | 2,503 | 40 |
| Growth | 2.3% | - |
| Activity | 8.5 | 3.1 |
| Latest commit | 3 days ago | 6 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
t5x
-
Maxtext: A simple, performant and scalable Jax LLM
[3]: https://github.com/google-research/t5x
Asking because I have worked extensively on training a large model on a TPU cluster, and started with Levanter, then tried MaxText, and finally ended up on EasyLM. My thoughts are:
- Levanter is well intentioned but unproven and lacking in features. For instance, its sharding is odd in that it requires the embedding dimension to be a multiple of the number of devices, so I can't test a model with embedding dimension 768 on a 512-device pod. I lost confidence in Levanter after finding some glaring correctness bugs (and helping get them fixed). Also, while I'm a huge fan of Equinox's approach, it's sadly underdeveloped (for instance, there's no way to specify a non-default weight-initialization strategy without manually doing model surgery to set the weights).
- MaxText was just very difficult to work with. We felt like we were fighting against it every time we needed to change something, because we would be digging through numerous needless layers of abstraction. My favorite: after one long day of debugging, I found a function whose only purpose was to pass its arguments, untouched, to another function; that function's only purpose was to pass its arguments, untouched, to a third function, which slightly changed them and passed them to a fourth function that did the actual work.
- EasyLM is, as the name says, easy. But on a deeper dive, the sharding functionality seems underdeveloped. What they call "FSDP" is not necessarily true FSDP; it's just one named axis of the JAX device mesh that happens to shard some data axes and some model-weight axes (see the sketch after this comment).
I'm still searching for a "perfect" JAX LLM codebase - any pointers?
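For context, here is a minimal sketch (not EasyLM's actual code) of what a single "fsdp" mesh axis means in JAX, using `jax.sharding`; the shapes, names, and sharding choices are illustrative only:

```python
import jax
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# One mesh axis named "fsdp": this is just a name for a dimension of the
# device grid, not FSDP semantics by itself.
devices = np.array(jax.devices())            # 1-D array of available devices
mesh = Mesh(devices, axis_names=("fsdp",))

# Mapping the batch axis of activations *and* the row axis of a weight matrix
# onto the same "fsdp" axis mixes data sharding with weight sharding, which is
# the behaviour described above. Shapes assume they divide the device count.
x_sharding = NamedSharding(mesh, P("fsdp", None))   # shard batch dimension
w_sharding = NamedSharding(mesh, P("fsdp", None))   # shard weight rows

x = jax.device_put(np.ones((32, 768), np.float32), x_sharding)
w = jax.device_put(np.ones((768, 768), np.float32), w_sharding)

y = jax.jit(lambda a, b: a @ b)(x, w)
print(y.sharding)
```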
-
Mixtral of Experts
> Are you using a normal training script i.e. "continued pretraining" on ALL parameters with just document fragments rather than input output pairs?
Yes, this one.
> do you make a custom dataset that has qa pairs about that particular knowledgebase?
This one. Once you have a checkpoint with the knowledge baked in, it makes sense to finetune. You can use LoRA or another PEFT method; we choose depending on the case (some orgs have millions of tokens, and I am not that confident PEFT holds up at that scale).
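To make the two approaches concrete, here is a hedged illustration of the data shapes involved; the field names and document contents are invented for the example, not taken from any particular codebase:

```python
# Continued pretraining: raw document fragments, trained with the ordinary
# next-token (language modeling) objective on all parameters.
pretraining_examples = [
    {"text": "Returns are accepted within 30 days of purchase..."},
    {"text": "Each API key is limited to 100 requests per minute..."},
]

# Supervised finetuning: question/answer pairs written about the same
# knowledge base, used once the checkpoint already contains the knowledge.
finetuning_examples = [
    {"prompt": "What is the return window?", "response": "30 days from purchase."},
    {"prompt": "What is the API rate limit?", "response": "100 requests per minute per key."},
]
```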
LoRA with raw document text may not work; I haven't tried that. Google has a good example of training scripts here: https://github.com/google-research/t5x (under training, and then finetuning). I like this one. Facebook Research also has a few in their repo.
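As a hedged sketch of the LoRA option mentioned above, using the Hugging Face peft library (the base model name and hyperparameters are placeholders, not a recommendation from the post):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; swap in whichever checkpoint already has the
# domain knowledge from continued pretraining.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_config = LoraConfig(
    r=8,                                   # adapter rank
    lora_alpha=16,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the small LoRA matrices train
```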
If you are just looking to scrape by, I would suggest just doing what they tell you to do. You can offer suggestions, but it's better to let them make the call. There is a lot of fluff and chatter online, so everyone is still figuring things out.
One note about pretraining: it is costly, so most OSS devs just do direct finetuning/LoRA. That works because their datasets come from the open internet. Orgs aren't finding much value with these approaches, and yet many communities are filled with these tactics.
-
Mixtures of Experts
Google have released the models and code for the Switch Transformer from Fedus et al. (2021) under the Apache 2.0 licence. [0]
There's also OpenMoE - an open-source effort to train a mixture of experts model. Currently they've released a model with 8 billion parameters. [1]
[0] https://github.com/google-research/t5x/blob/main/docs/models...
[1] https://github.com/XueFuzhao/OpenMoE
- [D] ClosedAI license, open-source license which restricts only OpenAI, Microsoft, Google, and Meta from commercial use
-
[P] T5 Implementation in PyTorch
You can find the official T5x repository by Google AI here: https://github.com/google-research/t5x
-
Google AI Introduces Confident Adaptive Language Modeling (CALM) For 3x Faster Text Generation With Language Models (LMs)
Quick Read: https://www.marktechpost.com/2022/12/20/google-ai-introduces-confident-adaptive-language-modeling-calm-for-3x-faster-text-generation-with-language-models-lms/
Paper: https://arxiv.org/pdf/2207.07061.pdf
Code: https://github.com/google-research/t5x/tree/main/t5x/contrib/calm
-
New free open source 20B parameter model (Not GPT Neo) achieves state-of-the-art results (SOTA) and outperforms GPT-3
From Section 9.1 in the paper, it looks like the weights in the Google buckets are associated with the T5X model(s?) here: https://github.com/google-research/t5x
t5-pytorch
-
[P] T5 Implementation in PyTorch
Link to the repository: https://github.com/conceptofmind/t5-pytorch
What are some alternatives?
google-research - Google Research
text-to-text-transfer-transformer - Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
bad-licenses - A compendium of absurd open-source licenses.
RETRO-pytorch - Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch
Flux.jl - Relax! Flux is the ML library that doesn't make you tensor
flamingo-pytorch - Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch
darwin-xnu - Legacy mirror of Darwin Kernel. Replaced by https://github.com/apple-oss-distributions/xnu
x-transformers - A simple but complete full-attention transformer with a set of promising experimental features from various papers
OpenMoE - A family of open-sourced Mixture-of-Experts (MoE) Large Language Models
performer-pytorch - An implementation of Performer, a linear attention-based transformer, in Pytorch