ML-Optimizers-JAX vs trax
| | ML-Optimizers-JAX | trax |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 40 | 7,948 |
| Growth | - | 0.6% |
| Activity | 4.5 | 4.7 |
| Latest commit | almost 3 years ago | 3 months ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
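The exact activity formula is not published here; as a purely hypothetical illustration of "recent commits have higher weight", one plausible scheme is an exponential decay over commit age:

```python
# Hypothetical sketch only: the comparison site does not disclose its metric.
# This shows one recency-weighted scheme (exponential decay over commit age)
# that makes "recent commits weigh more" concrete.
import math

def activity_score(commit_ages_days, half_life_days=30.0):
    """Sum of per-commit weights that halve every `half_life_days`."""
    decay = math.log(2) / half_life_days
    return sum(math.exp(-decay * age) for age in commit_ages_days)

# A burst of recent commits outscores the same count spread over a year.
print(activity_score([1, 2, 3, 5]))          # recent commits -> high score
print(activity_score([100, 200, 300, 360]))  # old commits -> low score
```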
ML-Optimizers-JAX
-
ML Optimizers from scratch using JAX
GitHub link (includes a link to a Kaggle notebook to run it directly) - shreyansh26/ML-Optimizers-JAX
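To make "optimizers from scratch in JAX" concrete, here is a minimal sketch of SGD with momentum written as a pure update function; the loss and parameter shapes are illustrative assumptions, not code taken from the repository.

```python
# Minimal sketch of a from-scratch optimizer in JAX: SGD with momentum as a
# pure function over parameter pytrees. Toy loss and shapes are illustrative.
import jax
import jax.numpy as jnp

def sgd_momentum_update(params, velocity, grads, lr=1e-2, beta=0.9):
    """One step: v <- beta*v + g;  params <- params - lr*v."""
    velocity = jax.tree_util.tree_map(lambda v, g: beta * v + g, velocity, grads)
    params = jax.tree_util.tree_map(lambda p, v: p - lr * v, params, velocity)
    return params, velocity

def loss_fn(params):
    # Toy quadratic loss with minimum at w = 3.
    return jnp.sum((params["w"] - 3.0) ** 2)

params = {"w": jnp.zeros(4)}
velocity = jax.tree_util.tree_map(jnp.zeros_like, params)
for _ in range(100):
    grads = jax.grad(loss_fn)(params)
    params, velocity = sgd_momentum_update(params, velocity, grads)
```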
trax
-
Replit's new Code LLM was trained in 1 week
and the implementation https://github.com/google/trax/blob/master/trax/models/resea... if you are interested.
Hope you get to look into this!
-
RedPajama: Reproduction of Llama with Friendly License
Thank you for developing the pipeline and amassing considerable compute for gathering and preprocessing this dataset!
I'm not sure if this is the right place to ask, but could you consider training an LLM with a more advanced, sparse transformer architecture (specifically, "Terraformer" from this paper https://arxiv.org/abs/2111.12763 and this codebase https://github.com/google/trax/blob/master/trax/models/resea... by Google Brain and OpenAI)? I understand the pressure to focus on a straightforward LLaMA replication, but as you surely see, it is a legacy dense architecture, which limits its inference performance. The new architecture is not just an academic curiosity: it has already been validated at scale by Google and provides a 10x+ inference performance boost on the same hardware.
Frankly, the community's compute budget - for training and for inference - isn't infinite, and neither is the public's interest in models that have no advantage (at least in convenience) over closed-source ones; so we should use both resources as efficiently as possible. It could be a big step forward if you trained at least LLaMA-Terraformer-7B and 13B foundation models on the whole dataset.
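The sparsity idea the comment appeals to can be sketched compactly. The following is a rough illustration, not the trax implementation: in the Scaling Transformers / Terraformer design, a small controller selects one active unit per block of the feed-forward hidden layer, so inference only needs the selected weight columns. All shapes and names here are assumptions for illustration.

```python
# Rough sketch of a blockwise-sparse feed-forward layer in the spirit of
# Scaling Transformers / Terraformer. NOT the trax code: shapes, names, and
# the controller are simplified illustrations.
import jax
import jax.numpy as jnp

def sparse_ffn(x, w_in, w_out, w_ctrl, block_size=4):
    """x: (d_model,); w_in: (d_model, d_ff); w_out: (d_ff, d_model)."""
    d_ff = w_in.shape[1]
    n_blocks = d_ff // block_size
    # Controller scores every hidden unit; argmax keeps one unit per block.
    scores = (x @ w_ctrl).reshape(n_blocks, block_size)
    active = jnp.argmax(scores, axis=-1)                     # (n_blocks,)
    mask = jax.nn.one_hot(active, block_size).reshape(d_ff)  # 0/1 over d_ff
    # Dense math shown for clarity; a real kernel would gather only the
    # selected columns instead of multiplying by a mask.
    hidden = jax.nn.relu(x @ w_in) * mask
    return hidden @ w_out

key = jax.random.PRNGKey(0)
d_model, d_ff = 8, 16
x = jax.random.normal(key, (d_model,))
w_in = jax.random.normal(key, (d_model, d_ff))
w_out = jax.random.normal(key, (d_ff, d_model))
w_ctrl = jax.random.normal(key, (d_model, d_ff))
print(sparse_ffn(x, w_in, w_out, w_ctrl))
```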
-
The founder of Gmail claims that ChatGPT can “kill” Google in two years.
But a couple of years later they came out with open-source implementations: https://github.com/google/trax/tree/master/trax/models/reformer
-
[D] Paper Explained - Sparse is Enough in Scaling Transformers (aka Terraformer) | Video Walkthrough
Code: https://github.com/google/trax/blob/master/trax/examples/Terraformer_from_scratch.ipynb
-
Why would I want to develop yet another deep learning framework?
-
How to train large models on a normal laptop?
Training language models is expensive. Train the biggest model you can afford. I assume you've tried the colab from the reformer GitHub: https://github.com/google/trax/tree/master/trax/models/reformer
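For readers who haven't opened that colab, here is a minimal sketch of instantiating a Reformer language model in trax. The `ReformerLM` argument names below are recalled from the library and may differ between versions, so treat them as assumptions; the linked colab is the authoritative example.

```python
# Minimal sketch of building a Reformer LM with trax. Argument names are
# from memory of trax.models.ReformerLM and may vary by version; the colab
# linked above is the canonical reference.
import trax

model = trax.models.ReformerLM(
    vocab_size=32000,   # illustrative vocabulary size
    d_model=512,
    n_layers=6,
    n_heads=8,
    max_len=16384,      # LSH attention keeps long contexts tractable
    mode='train',
)
```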
What are some alternatives?
RAdam - On the Variance of the Adaptive Learning Rate and Beyond
flax - Flax is a neural network library for JAX that is designed for flexibility.
DemonRangerOptimizer - Quasi Hyperbolic Rectified DEMON Adam/Amsgrad with AdaMod, Gradient Centralization, Lookahead, iterative averaging and decorrelated Weight Decay
dm-haiku - JAX-based neural network library
muzero-general - MuZero
AdasOptimizer - ADAS is short for Adaptive Step Size. Unlike optimizers that merely normalize the derivative, it fine-tunes the step size itself, aiming to make step-size scheduling obsolete and claiming state-of-the-art training performance.
extending-jax - Extending JAX with custom C++ and CUDA code
dnn_from_scratch - A high-level deep learning library for Convolutional Neural Networks, GANs and more, made from scratch (numpy/cupy implementation).
objax - An object-oriented machine learning framework built on JAX.
flaxOptimizers - A collection of optimizers, some arcane others well known, for Flax.
numpyro - Probabilistic programming with NumPy powered by JAX for autograd and JIT compilation to GPU/TPU/CPU.