trax vs hn-search

|  | trax | hn-search |
|---|---|---|
| Mentions | 7 | 1,627 |
| Stars | 7,957 | 524 |
| Growth | 0.4% | 0.2% |
| Activity | 4.7 | 2.9 |
| Latest commit | 3 months ago | 6 months ago |
| Language | Python | TypeScript |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
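The site does not publish its activity formula, but "recent commits have higher weight" suggests a decayed sum over commit ages. A minimal sketch of that idea, purely for illustration; the half-life value is an assumption, not the site's actual parameter:

```python
# Hypothetical sketch only: the comparison site does not document its
# activity metric. This just illustrates "recent commits weigh more"
# via an exponential decay with an assumed 30-day half-life.
def activity_score(commit_ages_in_days, half_life=30.0):
    """Sum of per-commit weights that halve every `half_life` days."""
    return sum(0.5 ** (age / half_life) for age in commit_ages_in_days)

print(activity_score([1, 5, 40, 200]))  # recent commits dominate the score
```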
trax
-
Maxtext: A simple, performant and scalable Jax LLM
Is t5x an encoder/decoder architecture?
Some more general options: the Flax ecosystem
https://github.com/google/flax?tab=readme-ov-file
or dm-haiku
https://github.com/google-deepmind/dm-haiku
are some of the best-developed communities in the JAX AI field.
Perhaps the “trax” repo? https://github.com/google/trax
Some HF examples: https://github.com/huggingface/transformers/tree/main/exampl...
Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
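Since the comment above points to Flax as one of the better-developed JAX options, here is a minimal sketch in Flax's linen API; the module, layer sizes, and dummy data are illustrative and not drawn from any of the linked repos:

```python
# Minimal Flax (linen) sketch: a two-layer MLP, the kind of starting
# point the comment's Flax recommendation implies. Hyperparameters are
# illustrative.
import jax
import jax.numpy as jnp
import flax.linen as nn


class MLP(nn.Module):
    hidden: int = 128
    out: int = 10

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(self.hidden)(x)   # first affine layer
        x = nn.relu(x)                 # nonlinearity
        return nn.Dense(self.out)(x)   # output logits


model = MLP()
x = jnp.ones((4, 32))                          # dummy batch of 4 inputs
params = model.init(jax.random.PRNGKey(0), x)  # initialize parameters
logits = model.apply(params, x)                # forward pass
print(logits.shape)                            # (4, 10)
```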
-
Replit's new Code LLM was trained in 1 week
And here is the implementation, if you are interested: https://github.com/google/trax/blob/master/trax/models/resea...
Hope you get to look into this!
-
RedPajama: Reproduction of Llama with Friendly License
Thank you for developing the pipeline and amassing considerable compute for gathering and preprocessing this dataset!
I'm not sure if this is the right place to ask about this, but could you consider training an LLM using a more advanced, sparse transformer architecture (specifically, "Terraformer" from this paper https://arxiv.org/abs/2111.12763 and this codebase https://github.com/google/trax/blob/master/trax/models/resea... by Google Brain and OpenAI)? I understand the pressure to focus on training a straightforward LLaMA replication, but of course you can see that it's a legacy dense architecture, which limits its inference performance. This new architecture is not just an academic curiosity but is already validated at scale by Google, providing a 10x+ inference performance boost on the same hardware.
Frankly, the community's compute budget - for training and for inference - isn't infinite, and neither is the public's interest in models that have no advantage (at least in convenience) over closed-source ones; so we should use both resources as efficiently as possible. It could be a big step forward if you trained at least LLaMA-Terraformer-7B and 13B foundation models on the whole dataset.
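To make the claimed inference saving concrete: the core Terraformer trick is a sparse feed-forward layer in which a small low-rank controller activates only one unit per block of the FFN hidden layer, so only the matching columns of the first weight matrix and rows of the second are multiplied. A toy NumPy sketch of that idea follows; it is heavily simplified (argmax selection is the inference-time behavior, and the real implementation lives in the linked trax codebase):

```python
# Toy sketch of Terraformer's sparse feed-forward idea (simplified; not
# the google/trax implementation). A low-rank controller picks one
# active unit per block of the hidden layer, so only the selected
# columns of W1 and rows of W2 participate - roughly a block-fold FLOP
# saving at inference.
import numpy as np

d_model, d_ff, block, d_low = 64, 256, 8, 8
n_blocks = d_ff // block                     # 32 blocks of 8 units each

rng = np.random.default_rng(0)
W1 = rng.standard_normal((d_model, d_ff)) * 0.02   # dense FFN weights
W2 = rng.standard_normal((d_ff, d_model)) * 0.02
C1 = rng.standard_normal((d_model, d_low)) * 0.02  # low-rank controller
C2 = rng.standard_normal((d_low, n_blocks, block)) * 0.02

def sparse_ffn(x):
    """x: (d_model,) one token's activations. Returns (d_model,)."""
    scores = np.einsum('l,lnb->nb', x @ C1, C2)   # (n_blocks, block)
    active = scores.argmax(axis=-1)               # one unit per block
    idx = np.arange(n_blocks) * block + active    # flat indices into d_ff
    h = np.maximum(x @ W1[:, idx], 0.0)           # only 32 of 256 units
    return h @ W2[idx, :]                         # matching rows of W2

y = sparse_ffn(rng.standard_normal(d_model))
print(y.shape)  # (64,)
```

During training the paper uses a softer, differentiable selection (Gumbel-style) rather than a hard argmax; the sketch shows only the inference path, which is where the speedup comes from.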
-
The founder of Gmail claims that ChatGPT can “kill” Google in two years.
But a couple of years later they came out with open-source implementations, yeah: https://github.com/google/trax/tree/master/trax/models/reformer
-
[D] Paper Explained - Sparse is Enough in Scaling Transformers (aka Terraformer) | Video Walkthrough
Code: https://github.com/google/trax/blob/master/trax/examples/Terraformer_from_scratch.ipynb
-
Why would I want to develop yet another deep learning framework?
-
How to train large models on a normal laptop?
Training language models is expensive. Train the biggest model you can afford. I assume you've tried the colab from the reformer GitHub: https://github.com/google/trax/tree/master/trax/models/reformer
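For anyone who has not opened that colab, the entry point looks roughly like this. A minimal sketch assuming the trax 1.x API, with hyperparameters shrunk to laptop scale; the tiny sizes are illustrative, not the colab's settings:

```python
# Minimal sketch of instantiating a small Reformer LM in trax, in the
# spirit of the linked colab. Assumes the trax 1.x API; sizes are tiny
# so this fits on a laptop.
import numpy as np
import trax

model = trax.models.ReformerLM(
    vocab_size=256,   # e.g. byte-level tokens
    d_model=128,
    d_ff=256,
    n_layers=2,
    n_heads=2,
    max_len=512,      # reversible layers keep activation memory modest
    mode='eval',
)

tokens = np.zeros((1, 512), dtype=np.int32)   # dummy batch
model.init(trax.shapes.signature(tokens))     # build the weights
out = model(tokens)                           # (1, 512, 256) per-token scores
```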
hn-search
-
Louis Rossmann: YouTube's Legal Team sent me a letter [video]
If you see a post that ought to have been moderated but hasn't been, the likeliest explanation is that we didn't see it. You can help by flagging it or emailing us at hn@ycombinator.com.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
-
An Oil Price-Fixing Conspiracy Caused 27% of All Inflation in 2021
Ok, but please don't post unsubstantive comments to Hacker News.
I understand the reason for repeating these sentiments—it's the same reason why they get upvoted to the top of threads*—but repetition of this kind is what we're most trying to avoid here.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
https://news.ycombinator.com/newsguidelines.html
* I've marked this one off topic now.
-
Validating app for manufacturers enhancing process reliability and efficiency
I was looking for it in the guidelines. There are a couple of conventions for postings; consider some prior examples: [https://hn.algolia.com/?q=show+hn]
-
Show HN: Hacker Search – A semantic search engine for Hacker News
Yeah, there are only three stories coming up from the site search:
https://hn.algolia.com/?q=postgres+clustering
Only one is semantically correct; the others pick up the wrong sense of clustering (i.e. k-means instead of multi-master writes).
But yeah, if one doesn't test the hard cases, how does one know it preserves semantics :D
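One way to turn such hard cases into a repeatable check is to score candidate titles against the query with an off-the-shelf embedding model. A sketch using sentence-transformers follows; this is a stand-in, since Hacker Search's actual stack isn't stated, and the model name and example titles are illustrative:

```python
# Sketch of a "hard case" check for semantic search: does the query
# embedding land nearer the database-clustering sense than the k-means
# sense? sentence-transformers is a stand-in for whatever embedding
# model the search engine actually uses.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')

query = "postgres clustering"
candidates = [
    "Setting up multi-master replication for PostgreSQL",      # desired sense
    "K-means clustering on Postgres data with scikit-learn",   # wrong sense
]

q = model.encode(query, convert_to_tensor=True)
c = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(q, c)[0]          # cosine similarity per candidate
for title, score in zip(candidates, scores):
    print(f"{score:.3f}  {title}")
```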
-
Longevity of Recordable CDs, DVDs and Blu-Rays
-
The Scientific Method Part 5: Illusions, Delusions, and Dreams
Like dismissing the work of Feyerabend or Wittgenstein without seemingly having read either:
https://hn.algolia.com/?dateRange=pastMonth&page=0&prefix=tr...
-
Any Google Analytics Alternatives?
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
-
Russian GRU was behind the attack in Vrbětice, NCOZ confirms
If it's not [flagged], there's no flagging and hence also no flagging ring. baybal2 has been banned on and off for years now https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
-
Gary Kildall, creator of CP/M, wrote Pixar's original 3D renderer [pdf]
The submitted title was "Gary Killdall, creator of CP/M, wrote Pixar's original 3D renderer".
Submitters: If you want to say what you think is important about an article, that's fine, but do it by adding a comment to the thread. Then your view will be on a level playing field with everyone else's: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
(From https://news.ycombinator.com/newsguidelines.html: "Please use the original title, unless it is misleading or linkbait; don't editorialize.")
-
Nearsightedness is at epidemic levels – and the problem begins in childhood
Vision therapy for myopia helps some people, but not everyone, likely due to genetic and neuroplasticity differences, https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu.... Nevertheless, many of the principles are useful for children whose eyes and brains are still developing.
What are some alternatives?
flax - Flax is a neural network library for JAX that is designed for flexibility.
duckduckgo-locales - Translation files for https://duckduckgo.com
dm-haiku - JAX-based neural network library
v - Simple, fast, safe, compiled language for developing maintainable software. Compiles itself in <1s with zero library dependencies. Supports automatic C => V translation. https://vlang.io
muzero-general - MuZero
parser - 📜 Extract meaningful content from the chaos of a web page
ML-Optimizers-JAX - Toy implementations of some popular ML optimizers using Python/JAX
readability - A standalone version of the readability lib
extending-jax - Extending JAX with custom C++ and CUDA code
yq - Command-line YAML, XML, TOML processor - jq wrapper for YAML/XML/TOML documents
objax
milkdown - 🍼 Plugin driven WYSIWYG markdown editor framework.