TextWorld vs trax

| | TextWorld | trax |
|---|---|---|
| Mentions | 1 | 7 |
| Stars | 1,154 | 7,962 |
| Growth | 0.7% | 0.4% |
| Activity | 7.8 | 4.7 |
| Last Commit | 3 months ago | 4 months ago |
| Language | Jupyter Notebook | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
TextWorld
-
A choose your own adventure writing and reading platform
I have been looking into creating something like that, but using AI to generate the stories, to add to my site at https://boredhumans.com. I already have an AI-powered story generator there, but it is not interactive. I was also looking at creating interactive stories with https://parl.ai/projects/light/ or https://github.com/microsoft/TextWorld, both of which use AI and are open source. But those are like AI Dungeon (https://play.aidungeon.io) rather than regular stories.
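For what it's worth, TextWorld's basic play loop is gym-like. A minimal sketch, assuming you already have a compiled game file (the path and the command below are placeholders):

```python
import textworld

# Start a session for a compiled game file (path is a placeholder).
env = textworld.start("games/example.ulx")
game_state = env.reset()
env.render()  # print the game's intro text

# One interactive step: send a text command, get narrative feedback back.
game_state, reward, done = env.step("open door")
env.render()
```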
trax
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options:
The Flax ecosystem
https://github.com/google/flax?tab=readme-ov-file
or dm-haiku
https://github.com/google-deepmind/dm-haiku
were some of the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
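To make the ecosystem comparison concrete, here is a minimal Flax (linen) model definition; Flax, Haiku, and trax all build on JAX but differ mainly in how modules and parameters are declared. Names and sizes below are illustrative:

```python
import jax
import jax.numpy as jnp
from flax import linen as nn

class MLP(nn.Module):
    """A two-layer perceptron; sizes are arbitrary."""
    hidden: int

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.hidden)(x))
        return nn.Dense(1)(x)

model = MLP(hidden=64)
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 16)))  # build weights
y = model.apply(params, jnp.ones((4, 16)))                     # (4, 1) output
```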
-
Replit's new Code LLM was trained in 1 week
And the implementation is at https://github.com/google/trax/blob/master/trax/models/resea... if you are interested.
Hope you get to look into this!
-
RedPajama: Reproduction of Llama with Friendly License
Thank you for developing the pipeline and amassing considerable compute for gathering and preprocessing this dataset!
I'm not sure if this is the right place to ask, but could you consider training an LLM with a more advanced, sparse transformer architecture (specifically, "Terraformer" from this paper https://arxiv.org/abs/2111.12763 and this codebase https://github.com/google/trax/blob/master/trax/models/resea... by Google Brain and OpenAI)? I understand the pressure to focus on a straightforward LLaMA replication, but as you know it's a legacy dense architecture, which limits its inference performance. The new architecture is not just an academic curiosity: it has already been validated at scale by Google, providing a 10x+ inference performance boost on the same hardware.
Frankly, the community's compute budget, for training and for inference, isn't infinite, and neither is the public's interest in models that have no advantage (at least in convenience) over closed-source ones; we should use both resources as efficiently as possible. It would be a big step forward if you trained at least LLaMA-Terraformer-7B and 13B foundation models on the whole dataset.
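For readers unfamiliar with the idea: the core of the Scaling Transformers / Terraformer approach is to activate only a small subset of each feed-forward layer per token. The toy sketch below replaces the paper's trainable controller with simple top-k magnitude selection, so it illustrates only the sparsity pattern, not the actual method:

```python
import jax
import jax.numpy as jnp

def sparse_ffn(x, w1, w2, k):
    # Toy sparse feed-forward block: keep only the k largest hidden
    # activations per token and zero the rest. The paper uses a trained
    # controller to pick columns; top-k magnitude is a stand-in here.
    h = jax.nn.relu(x @ w1)                     # (tokens, d_ff)
    kth = jnp.sort(h, axis=-1)[..., -k]         # k-th largest value per token
    h = jnp.where(h >= kth[..., None], h, 0.0)  # zero everything below it
    return h @ w2                               # only k columns of w2 matter

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
d_model, d_ff = 8, 32
x = jax.random.normal(k1, (4, d_model))
w1 = 0.1 * jax.random.normal(k2, (d_model, d_ff))
w2 = 0.1 * jax.random.normal(k3, (d_ff, d_model))
out = sparse_ffn(x, w1, w2, k=4)  # ~4 of 32 hidden units fire per token
```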
-
The founder of Gmail claims that ChatGPT can “kill” Google in two years.
But a couple of years later they came out with open-source implementations, yeah: https://github.com/google/trax/tree/master/trax/models/reformer
-
[D] Paper Explained - Sparse is Enough in Scaling Transformers (aka Terraformer) | Video Walkthrough
Code: https://github.com/google/trax/blob/master/trax/examples/Terraformer_from_scratch.ipynb
-
Why would I want to develop yet another deep learning framework?
-
How to train large models on a normal laptop?
Training language models is expensive; train the biggest model you can afford. I assume you've tried the Colab from the Reformer GitHub: https://github.com/google/trax/tree/master/trax/models/reformer
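If you just want to poke at the model outside that Colab, here is a rough sketch of instantiating trax's ReformerLM. The hyperparameters are illustrative, and the exact training setup is in the repo's notebooks:

```python
import numpy as np
import trax

# A small Reformer language model; eval mode disables dropout.
model = trax.models.ReformerLM(
    vocab_size=256, d_model=128, d_ff=256,
    n_layers=2, n_heads=2, max_len=2048, mode='eval')

# Initialize weights from an input signature, then run a dummy batch.
model.init(trax.shapes.ShapeDtype((1, 2048), dtype=np.int32))
tokens = np.zeros((1, 2048), dtype=np.int32)
logits = model(tokens)  # (1, 2048, 256) log-probabilities over the vocab
```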
What are some alternatives?
evennia - Python MUD/MUX/MUSH/MU* development system
flax - Flax is a neural network library for JAX that is designed for flexibility.
Reinforcement-Learning - Learn Deep Reinforcement Learning in 60 days! Lectures & Code in Python. Reinforcement Learning + Deep Learning
dm-haiku - JAX-based neural network library
TensorFlow-Tutorials - TensorFlow Tutorials with YouTube Videos
muzero-general - MuZero
nn - 🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
ML-Optimizers-JAX - Toy implementations of some popular ML optimizers using Python/JAX
FinGPT - FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace.
extending-jax - Extending JAX with custom C++ and CUDA code
objax
numpyro - Probabilistic programming with NumPy powered by JAX for autograd and JIT compilation to GPU/TPU/CPU.