LaMDA-rlhf-pytorch
Discontinued. An open-source pre-training implementation of Google's LaMDA in PyTorch, with plans to add RLHF similar to ChatGPT.
An open-source implementation for the pre-training architecture of Google's LaMDA in PyTorch. The research paper outlines an autoregressive, decoder-only, GPT-like transformer language model. The transformer uses T5 relative positional bias in the attention layers and gated-GELU activation function in the feed-forward layers.
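The two architectural details named above can be sketched in a few lines of PyTorch. The block below is a minimal, illustrative sketch rather than the repository's actual code: the gated-GELU (GEGLU) feed-forward follows the standard formulation, while the relative position bias is simplified to clipped relative distances instead of T5's log-spaced bucketing; all dimensions and names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GEGLU(nn.Module):
    """Gated-GELU feed-forward block (sketch; `mult` widening factor is illustrative)."""
    def __init__(self, dim, mult=4):
        super().__init__()
        # Project to twice the hidden width, then split into value and gate halves.
        self.proj_in = nn.Linear(dim, dim * mult * 2)
        self.proj_out = nn.Linear(dim * mult, dim)

    def forward(self, x):
        x, gate = self.proj_in(x).chunk(2, dim=-1)
        # Gate the values with a GELU-activated branch, then project back down.
        return self.proj_out(x * F.gelu(gate))

class RelativePositionBias(nn.Module):
    """Per-head additive attention bias indexed by relative position.

    Simplified sketch: relative distances are clipped to ±max_distance rather
    than bucketed log-spaced as in the actual T5 scheme.
    """
    def __init__(self, heads, max_distance=128):
        super().__init__()
        self.max_distance = max_distance
        self.bias = nn.Embedding(2 * max_distance + 1, heads)

    def forward(self, n):
        pos = torch.arange(n)
        # Relative offset j - i, shifted to non-negative embedding indices.
        rel = (pos[None, :] - pos[:, None]).clamp(
            -self.max_distance, self.max_distance
        ) + self.max_distance
        # (n, n, heads) -> (heads, n, n), added to the attention logits.
        return self.bias(rel).permute(2, 0, 1)
```

In use, the `(heads, n, n)` bias tensor is added to the pre-softmax attention scores of each layer, and the GEGLU block replaces the usual two-layer MLP in each transformer block.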
The repository currently contains a basic training script, along with Hugging Face datasets and Weights & Biases integration.
LaMDA research paper: https://arxiv.org/abs/2201.08239
Github repository for the model: https://github.com/conceptofmind/LaMDA-pytorch
The pre-training architecture was peer-reviewed by Dr. Phil Wang. Please check out and support his work: https://github.com/lucidrains.
Updates: https://twitter.com/EnricoShippole