| | PaLM-pytorch | PaLM-flax |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 821 | 14 |
| Growth | - | - |
| Activity | 0.0 | 4.2 |
| Last commit | about 2 years ago | over 2 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - the month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
PaLM-pytorch
Implementing the gargantuan Pathways (PaLM) model with Colossal-AI, easily and efficiently!
We first reproduced the PaLM model architecture on a single GPU, following the description in the PaLM paper. We referred to the following repo for the reproduction (see the usage sketch after the list of mentions below): https://github.com/lucidrains/PaLM-pytorch
- [R] Google's 540B (Dense) model Pathways LLM, "Unlocks" new tasks proportional to scale
- Pathways Language Model (Palm): 540B Parameters for Breakthrough Perf
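For orientation, here is a minimal usage sketch of the lucidrains/PaLM-pytorch implementation referenced above. The constructor arguments follow that repo's README at the time of writing and may have changed since; the hyperparameters are illustrative, not the 540B configuration.

```python
# Minimal usage sketch of lucidrains/PaLM-pytorch (pip install PaLM-pytorch).
# Small illustrative hyperparameters, not the paper's 540B configuration.
import torch
from palm_pytorch import PaLM

model = PaLM(
    num_tokens = 20000,  # vocabulary size
    dim = 512,           # model width
    depth = 12,          # number of transformer blocks
    heads = 8,           # attention heads
    dim_head = 64,       # dimension per head
)

tokens = torch.randint(0, 20000, (1, 2048))  # dummy token ids
logits = model(tokens)                       # (1, 2048, 20000)
```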
PaLM-flax
[R] Proprietary ML model in research paper
Google, DeepMind, and OpenAI normally include a section in their research papers with enough detail to replicate the models' pre-training and fine-tuning architectures. For example, see a PyTorch replication of the pre-training architecture outlined in the LaMDA research paper (https://github.com/conceptofmind/LaMDA-pytorch/blob/main/lamda_pytorch/lamda_pytorch.py), or an implementation of Google's SOTA Pathways Language Model in JAX/Flax (https://github.com/conceptofmind/PaLM-flax).
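To illustrate what such a replication involves, below is a minimal, hypothetical Flax sketch of PaLM's distinctive "parallel" transformer block, where attention and the feed-forward MLP share a single pre-LayerNorm and both are summed into the residual stream: y = x + Attn(LN(x)) + MLP(LN(x)). This is a simplified illustration, not code from PaLM-flax; the real model also uses SwiGLU, multi-query attention, RoPE, and causal masking, all omitted here for brevity.

```python
# Hypothetical sketch of PaLM's "parallel" block formulation in Flax:
#   y = x + Attention(LN(x)) + MLP(LN(x))
# Simplified: no SwiGLU, multi-query attention, RoPE, or causal mask.
import jax
import jax.numpy as jnp
import flax.linen as nn

class ParallelBlock(nn.Module):
    dim: int
    heads: int
    ff_mult: int = 4

    @nn.compact
    def __call__(self, x):
        h = nn.LayerNorm()(x)                        # one shared pre-norm
        attn = nn.SelfAttention(num_heads=self.heads)(h)
        ff = nn.Dense(self.dim * self.ff_mult)(h)    # feed-forward branch
        ff = nn.Dense(self.dim)(nn.swish(ff))        # plain SwiSH stands in for SwiGLU
        return x + attn + ff                         # parallel residual sum

x = jnp.zeros((1, 16, 128))                          # (batch, seq, dim)
block = ParallelBlock(dim=128, heads=8)
params = block.init(jax.random.PRNGKey(0), x)
y = block.apply(params, x)                           # (1, 16, 128)
```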
What are some alternatives?
CoCa-pytorch - Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
nuwa-pytorch - Implementation of NÜWA, a state-of-the-art attention network for text-to-video synthesis, in Pytorch
x-transformers - A concise but complete full-attention transformer with a set of promising experimental features from various papers
PaLM-colossalai - Scalable PaLM implementation in PyTorch
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
soundstorm-pytorch - Implementation of SoundStorm, Efficient Parallel Audio Generation from Google Deepmind, in Pytorch
whisper-timestamped - Multilingual Automatic Speech Recognition with word-level timestamps and confidence