| | trax | IF |
|---|---|---|
| Mentions | 7 | 43 |
| Stars | 7,957 | 7,512 |
| Growth | 0.4% | 0.6% |
| Activity | 4.7 | 4.2 |
| Last commit | 3 months ago | 22 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
trax
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options: the Flax ecosystem ( https://github.com/google/flax?tab=readme-ov-file ) and dm-haiku ( https://github.com/google-deepmind/dm-haiku ) are among the best-developed communities in the JAX AI field (a minimal Flax sketch follows below).
Perhaps the "trax" repo? https://github.com/google/trax
Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
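To give a feel for how these libraries read in practice, here is a minimal Flax example using the standard flax.linen API; the toy MLP itself is illustrative, not taken from any of the linked repos:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    """A toy two-layer perceptron in flax.linen (illustrative only)."""
    hidden: int

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(self.hidden)(x)  # first linear layer
        x = nn.relu(x)                # nonlinearity
        return nn.Dense(1)(x)         # scalar output head

model = MLP(hidden=32)
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 8)))  # initialize parameters
y = model.apply(params, jnp.ones((4, 8)))                     # forward pass
print(y.shape)  # (4, 1)
```

dm-haiku code reads similarly but wraps the module in `hk.transform` to get a pure init/apply pair.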
-
Replit's new Code LLM was trained in 1 week
…and the implementation is at https://github.com/google/trax/blob/master/trax/models/resea... if you are interested.
Hope you get to look into this!
-
RedPajama: Reproduction of Llama with Friendly License
Thank you for developing the pipeline and amassing considerable compute for gathering and preprocessing this dataset!
I'm not sure if this is the right place to ask about this, but could you consider training an LLM using a more advanced, sparse transformer architecture (specifically, "Terraformer" from this paper https://arxiv.org/abs/2111.12763 and this codebase https://github.com/google/trax/blob/master/trax/models/resea... by Google Brain and OpenAI)? I understand the pressure to focus on training a straightforward LLaMA replication, but of course you see that it's a legacy dense architecture that limits inference performance. This new architecture is not just an academic curiosity; it has already been validated at scale by Google, providing a 10x+ inference performance boost on the same hardware (a toy sketch of the idea follows below).
Frankly, the community's compute budget - for training and for inference - isn't infinite, and neither is the public's interest in models that have no advantage (at least in convenience) over closed-source ones; so we should use both resources as efficiently as possible. It would be a big step forward if you trained at least LLaMA-Terraformer-7B and 13B foundation models on the whole dataset.
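For context on what the comment is asking for: in the Scaling Transformers paper, a small controller decides which feed-forward units are active for each token, so inference touches only a fraction of the weights. Below is a toy NumPy sketch of that idea; it is a deliberate simplification (plain top-k selection rather than the paper's trained controller), and all names are illustrative, not the trax implementation:

```python
import numpy as np

def sparse_ffn(x, W_in, W_out, controller, k):
    """Toy sparse feed-forward block in the spirit of Scaling Transformers.

    x:          (d_model,) activation for one token
    W_in:       (d_model, d_ff) input projection
    W_out:      (d_ff, d_model) output projection
    controller: (d_model, d_ff) scores which FFN units to activate
    k:          number of active units (k << d_ff)
    """
    scores = x @ controller                    # cheap relevance score per FFN unit
    active = np.argsort(scores)[-k:]           # keep only the top-k units
    h = np.maximum(x @ W_in[:, active], 0.0)   # ReLU over the active slice only
    return h @ W_out[active, :]                # project back; most weights untouched

# Dense d_ff=4096 vs. k=256 active units: ~16x fewer FFN FLOPs per token.
rng = np.random.default_rng(0)
d_model, d_ff, k = 512, 4096, 256
out = sparse_ffn(rng.standard_normal(d_model),
                 rng.standard_normal((d_model, d_ff)),
                 rng.standard_normal((d_ff, d_model)),
                 rng.standard_normal((d_model, d_ff)),
                 k)
print(out.shape)  # (512,)
```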
-
The founder of Gmail claims that ChatGPT can “kill” Google in two years.
But a couple of years later they came out with open-source implementations, yeah: https://github.com/google/trax/tree/master/trax/models/reformer
-
[D] Paper Explained - Sparse is Enough in Scaling Transformers (aka Terraformer) | Video Walkthrough
Code: https://github.com/google/trax/blob/master/trax/examples/Terraformer_from_scratch.ipynb
-
Why would I want to develop yet another deep learning framework?
-
How to train large models on a normal laptop?
Training language models is expensive. Train the biggest model you can afford. I assume you've tried the Colab notebook from the Reformer GitHub: https://github.com/google/trax/tree/master/trax/models/reformer
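One reason Reformer fits on modest hardware is its reversible residual layers: intermediate activations never need to be stored, because each layer's inputs can be recomputed from its outputs during the backward pass. A toy NumPy sketch of the reversible coupling (illustrative, not the trax code; in Reformer, f is attention and g is the feed-forward block):

```python
import numpy as np

def f(x):  # stand-in for the attention sublayer
    return np.tanh(x)

def g(x):  # stand-in for the feed-forward sublayer
    return np.tanh(2.0 * x)

def reversible_forward(x1, x2):
    """RevNet-style coupling: the outputs fully determine the inputs."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def reversible_inverse(y1, y2):
    """Recompute inputs from outputs -- no activations need to be stored."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

x1, x2 = np.random.randn(4), np.random.randn(4)
y1, y2 = reversible_forward(x1, x2)
r1, r2 = reversible_inverse(y1, y2)
assert np.allclose(x1, r1) and np.allclose(x2, r2)  # perfectly invertible
```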
IF
-
Google Imagen 2
Stability AI has gaps in SDXL for text, but they seem to do a better job with DeepFloyd ( https://github.com/deep-floyd/IF ). I have done a lot of interesting text things with DeepFloyd.
-
SDXL Release Date: July 18th
They made the IF model ( https://github.com/deep-floyd/IF ), which has a stronger understanding of text.
-
Which model is good with image + text generation?
https://github.com/deep-floyd/IF It's a bit difficult to install right now; I can't install it on my card, unfortunately. They had a Hugging Face Space where you could play with it, but it seems to be offline right now. Hopefully it'll be incorporated into Auto1111 eventually.
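If installing from the repo fails, IF can also be run through diffusers. A minimal sketch of stage I of the cascade, assuming the DeepFloyd/IF-I-XL-v1.0 weights on the Hugging Face Hub (gated; you must accept the license there first) and enough VRAM for fp16:

```python
import torch
from diffusers import DiffusionPipeline

# Stage I generates at 64x64; stages II/III of the cascade upscale the result.
stage_1 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0",
    variant="fp16",
    torch_dtype=torch.float16,
)
stage_1.enable_model_cpu_offload()  # helps on cards with limited VRAM

prompt = 'a photo of a sign that says "hello world"'
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)  # T5 text encoding
image = stage_1(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pt",
).images
```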
-
Best AI for generating media covers (mixture of imagery and text)
I'm pretty sure this is the best open model for image + text: https://github.com/deep-floyd/IF
-
Stability AI Launches Stable Diffusion XL 0.9
Text will be better due to sheer scale, but it will still be limited by the use of CLIP for text encoding (BPEs + contrastive training). So SDXL 0.9 may improve, but it should still be worse than models that use T5, like https://github.com/deep-floyd/IF
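To make the encoder difference concrete, here is a sketch using Hugging Face transformers that loads both kinds of text encoder; t5-small stands in for IF's much larger T5-XXL, and the point is only that the diffusion model cross-attends to per-token features from encoders trained with very different objectives:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel, CLIPTextModel

prompt = 'a neon sign reading "OPEN 24 HOURS"'

# T5 encoder (what IF conditions on): trained on raw text spans,
# so per-token features carry more spelling information.
t5_tok = AutoTokenizer.from_pretrained("t5-small")  # small stand-in for T5-XXL
t5 = T5EncoderModel.from_pretrained("t5-small")
t5_feats = t5(**t5_tok(prompt, return_tensors="pt")).last_hidden_state

# CLIP text encoder (what SD/SDXL condition on): trained contrastively
# against images, which tends to wash out exact character sequences.
clip_tok = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")
clip = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
clip_feats = clip(**clip_tok(prompt, return_tensors="pt")).last_hidden_state

# Both are (1, seq_len, hidden); the training objective is what differs.
print(t5_feats.shape, clip_feats.shape)
```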
-
Comparing Adobe Firefly, Dalle-2, and OpenJourney
-
Model that can do text?
Are you thinking of DeepFloyd IF? https://github.com/deep-floyd/IF
-
SDXL beta test: prompt is Star Trek in the style of Dr. Seuss
Or maybe people are conflating it with Deep Floyd, another SAI model: https://github.com/deep-floyd/IF
-
The stairwell in this hotel goes straight forward (13 floors) instead of wrapping around.
Deep Floyd IF does text really well and is out. https://github.com/deep-floyd/IF
-
Google “We Have No Moat, and Neither Does OpenAI”
Use https://github.com/deep-floyd/IF; it uses an LLM to generate exactly the art you need.
What are some alternatives?
flax - Flax is a neural network library for JAX that is designed for flexibility.
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
dm-haiku - JAX-based neural network library
stat4701 - Final Project
muzero-general - MuZero
mation-spec
ML-Optimizers-JAX - Toy implementations of some popular ML optimizers using Python/JAX
DeepFloyd-IF-colab
extending-jax - Extending JAX with custom C++ and CUDA code
magma-chat - Ruby on Rails 7-based ChatGPT Bot Platform
objax
hate-speech-project