trax vs muzero-general

| | trax | muzero-general |
| --- | --- | --- |
| Mentions | 7 | 14 |
| Stars | 7,957 | 2,382 |
| Growth | 0.4% | - |
| Activity | 4.7 | 0.0 |
| Latest commit | 3 months ago | 4 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
trax
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options: the Flax ecosystem (https://github.com/google/flax?tab=readme-ov-file) or dm-haiku (https://github.com/google-deepmind/dm-haiku) are some of the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples: https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
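As a taste of what the Flax side looks like, here is a minimal sketch of a model in Flax's linen API. The MLP itself is an illustrative toy, not tied to any of the repos above:

```python
import jax
import jax.numpy as jnp
from flax import linen as nn

class MLP(nn.Module):
    """Toy two-layer network using Flax's linen API."""
    hidden: int

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.hidden)(x))
        return nn.Dense(1)(x)

model = MLP(hidden=32)
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 8)))  # initialize weights
out = model.apply(params, jnp.ones((4, 8)))                   # forward pass
```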
-
Replit's new Code LLM was trained in 1 week
And here is the implementation, if you are interested: https://github.com/google/trax/blob/master/trax/models/resea...
Hope you get to look into this!
-
RedPajama: Reproduction of Llama with Friendly License
Thank you for developing the pipeline and amassing considerable compute for gathering and preprocessing this dataset!
I'm not sure if this is the right place to ask, but could you consider training an LLM using a more advanced, sparse transformer architecture (specifically, "Terraformer" from this paper https://arxiv.org/abs/2111.12763 and this codebase https://github.com/google/trax/blob/master/trax/models/resea... by Google Brain and OpenAI)? I understand the pressure to focus on training a straightforward LLaMA replication, but as you surely see, it is a legacy dense architecture that limits inference performance. This new architecture is not just an academic curiosity: it has already been validated at scale by Google, providing a 10x+ inference performance boost on the same hardware.
Frankly, the community's compute budget - for training and for inference - isn't infinite, and neither is the public's interest in models that have no advantage (at least in convenience) over closed-source ones; so we should use both of those resources as efficiently as possible. It would be a big step forward if you trained at least LLaMA-Terraformer-7B and 13B foundation models on the whole dataset.
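To make the sparsity idea concrete, here is a rough PyTorch sketch of the sparse FFN from that paper. The binary mask simulates the controller's selection (a real implementation gathers only the selected weight rows, which is where the inference speedup comes from), and all names here are illustrative rather than taken from the trax code:

```python
import torch
import torch.nn.functional as F

def sparse_ffn(x, w_in, w_out, controller, block_size):
    """One active unit per block of the d_ff dimension, in the spirit of
    "Sparse is Enough in Scaling Transformers" (arXiv:2111.12763).
    Sparsity is simulated with a mask; a real implementation gathers only
    the selected rows of w_in/w_out to realize the speedup."""
    d_ff = w_in.shape[1]
    logits = (x @ controller).view(x.shape[0], d_ff // block_size, block_size)
    idx = logits.argmax(dim=-1, keepdim=True)            # winning unit per block
    mask = torch.zeros_like(logits).scatter_(-1, idx, 1.0).view(x.shape[0], d_ff)
    return (F.relu(x @ w_in) * mask) @ w_out

x = torch.randn(2, 64)                                    # (batch, d_model)
y = sparse_ffn(x, torch.randn(64, 256), torch.randn(256, 64),
               torch.randn(64, 256), block_size=32)       # -> (2, 64)
```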
-
The founder of Gmail claims that ChatGPT can “kill” Google in two years.
But a couple of years later they did come out with open-source implementations: https://github.com/google/trax/tree/master/trax/models/reformer
-
[D] Paper Explained - Sparse is Enough in Scaling Transformers (aka Terraformer) | Video Walkthrough
Code: https://github.com/google/trax/blob/master/trax/examples/Terraformer_from_scratch.ipynb
- Why would I want to develop yet another deep learning framework?
-
How to train large models on a normal laptop?
Training language models is expensive. Train the biggest model you can afford. I assume you've tried the Colab from the Reformer GitHub: https://github.com/google/trax/tree/master/trax/models/reformer
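If you want to poke at it outside Colab, instantiating the trax Reformer language model looks roughly like this. The hyperparameters below are made-up small values so it fits in laptop memory; check the repo's configs for real ones:

```python
import trax

# Small, illustrative hyperparameters; the real configs live in the repo.
# ReformerLM uses LSH attention and reversible layers, which is what lets
# bigger-than-usual models fit in memory.
model = trax.models.ReformerLM(
    vocab_size=32000,
    d_model=256,
    d_ff=512,
    n_layers=2,
    max_len=2048,
    mode='train',
)
```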
muzero-general
-
Open source rules engine for Magic: The Gathering
I went looking for MuZero implementations in order to see how, exactly, they interact with the game space. Based on this one, which had the most stars in the muzero topic, it appears that it needs to be able to discern the legal next steps from the current game state: https://github.com/werner-duvaud/muzero-general/blob/master/...
So I guess one could MuZero the cards Forge has implemented, but I believe it's a bit chicken-and-egg with a "free text" game like M:TG -- in order to train, one would need to know the legal steps for any random game state, but in order to have legal steps, one would need to be able to read and interpret English rules and card text. See the sketch below for what the implementation expects.
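For reference, the interface that implementation expects looks roughly like this sketch (method names follow muzero-general's abstract game class; the tic-tac-toe body is just a stand-in). The key point for M:TG is legal_actions(): MCTS only expands actions the engine can enumerate from the current state.

```python
class Game:
    """Stand-in for muzero-general's game interface, with tic-tac-toe as
    the toy state. MuZero's MCTS only expands moves returned by
    legal_actions(), so the engine must enumerate the legal moves of any
    reachable state -- exactly the hard part for free-text M:TG."""

    def __init__(self):
        self.board = [0] * 9
        self.player = 1

    def legal_actions(self):
        # Enumerate legal moves for the current state.
        return [i for i, cell in enumerate(self.board) if cell == 0]

    def step(self, action):
        self.board[action] = self.player
        self.player *= -1
        done = not self.legal_actions()
        return self.board, 0, done  # observation, reward, done
```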
- I placed Stockfish (white) against ChatGPT (black). Here's how the game went.
- Ask HN: What interesting problems are you working on? (2022 Edition)
-
How to "fit" the output of the Critic to the dimension of the reward?
You may want to use the trick described in https://arxiv.org/pdf/1805.11593.pdf as the Transformed Bellman Operator. Its effectiveness is demonstrated in the original MuZero paper (https://arxiv.org/pdf/1911.08265.pdf, Appendix F). You can find an implementation of that method in https://github.com/werner-duvaud/muzero-general, at muzero/models.py:649 (def support_to_scalar).
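For the curious, the core of that trick is an invertible value transform h and its closed-form inverse (used when projecting the categorical support back to a scalar). A minimal sketch, with function names of my own choosing rather than the ones in muzero-general:

```python
import torch

EPS = 0.001  # epsilon from Pohlen et al. 2018; keeps h invertible

def scale_value(x):
    # h(x) = sign(x) * (sqrt(|x| + 1) - 1) + eps * x
    return torch.sign(x) * (torch.sqrt(torch.abs(x) + 1) - 1) + EPS * x

def unscale_value(y):
    # h^{-1}(y), closed form as given in the MuZero paper, Appendix F
    return torch.sign(y) * (
        ((torch.sqrt(1 + 4 * EPS * (torch.abs(y) + 1 + EPS)) - 1) / (2 * EPS)) ** 2 - 1
    )
```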
-
MuZero unable to solve non-slippery FrozenLake environment?
I have used this implementation from MuZero: https://github.com/werner-duvaud/muzero-general
-
RL for chess
+1 to taking a look at OpenSpiel. It has AlphaZero in C++ and Python, and there is even an open PR that allows running a UCI bot (e.g. Stockfish). You can also load chess via the OpenSpiel wrapper in muzero-general: https://github.com/werner-duvaud/muzero-general
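Loading chess through OpenSpiel's Python API is a one-liner; here's a quick random-playout sanity check using the standard pyspiel calls:

```python
import random
import pyspiel

game = pyspiel.load_game("chess")
state = game.new_initial_state()
while not state.is_terminal():
    # Pick uniformly among the legal moves of the current position.
    state.apply_action(random.choice(state.legal_actions()))
print(state.returns())  # e.g. [1.0, -1.0] for a white win
```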
-
The future of MuZero, and where to go for news
When I looked up some community implementations, like Werner Duvaud's on GitHub and Discord, hoping to make my own contributions to this effect, I soon found that I was hopelessly out of my depth as an amateur programmer, even with the help of other sources like this walkthrough series. From what I could tell, though, most of the people working on this sort of thing were tackling relatively simple games. At first I thought this was largely due to the hobby time or computing power available to these users, but then I noticed that, unless I have misunderstood something, the games have to be rebuilt entirely in the engine of (this implementation of) MuZero, which is itself an obvious limit on the complexity of the games chosen.
- Is MuZero currently the best RL algo that we have now?
-
"muzero-general", PyTorch/Ray code for Gym/Atari/board-games (reasonable results + checkpoints for small tasks)
Windows support (Experimental / Workaround: Use the notebook in Google Colab)
-
Muzero code implementation
There are several if you google "muzero github", e.g. https://github.com/werner-duvaud/muzero-general
What are some alternatives?
flax - Flax is a neural network library for JAX that is designed for flexibility.
deep-RL-trading - playing idealized trading games with deep reinforcement learning
dm-haiku - JAX-based neural network library
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
ML-Optimizers-JAX - Toy implementations of some popular ML optimizers using Python/JAX
alpha-zero-general - A clean implementation based on AlphaZero for any game in any framework + tutorial + Othello/Gobang/TicTacToe/Connect4 and more
extending-jax - Extending JAX with custom C++ and CUDA code
open_spiel - OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.
objax - Objax is an object-oriented machine learning framework built on JAX
stable-baselines3-contrib - Contrib package for Stable-Baselines3 - Experimental reinforcement learning (RL) code
numpyro - Probabilistic programming with NumPy powered by JAX for autograd and JIT compilation to GPU/TPU/CPU.
pytorch-ddpg - Deep deterministic policy gradient (DDPG) in PyTorch 🚀