t5x

By google-research

T5x Alternatives

Similar projects and alternatives to t5x

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number generally means a better t5x alternative or a more similar project.

t5x reviews and mentions

Posts with mentions or reviews of t5x. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-23.
  • Maxtext: A simple, performant and scalable Jax LLM
    10 projects | news.ycombinator.com | 23 Apr 2024
    [3]: https://github.com/google-research/t5x

    Asking because I have worked extensively on training a large model on a TPU cluster, and started with Levanter, then tried MaxText, and finally ended up on EasyLM. My thoughts are:

    - Levanter is well-intentioned but unproven and lacking in features. For instance, its sharding is odd in that it requires the embedding dimension to be a multiple of the number of devices, so I can't test a model with embedding dimension 768 on a 512-device pod. I lost confidence in Levanter after finding some glaring correctness bugs (and helping get them fixed). Also, while I'm a huge fan of Equinox's approach, it's sadly underdeveloped: for instance, there's no way to specify a non-default weight initialization strategy without manually doing model surgery to set the weights (a sketch of that surgery follows this list).

    - MaxText was just very difficult to work with. We felt like we were fighting against it every time we needed to change something, because we would be digging through numerous needless layers of abstraction. My favorite find, after one long day of debugging, was a function whose only purpose was to pass its arguments untouched to another function; that function's only purpose was to pass its arguments untouched to a third function, which slightly changed them and passed them to a fourth function that did the actual work.

    - EasyLM is, as the name says, easy. But on a deeper dive, the sharding functionality seems underdeveloped. What it calls "FSDP" is not necessarily true FSDP: it's just one axis of the JAX mesh that happens to shard some data axes and some model-weight axes (see the mesh sketch after this list).

    I'm still searching for a "perfect" JAX LLM codebase - any pointers?
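
    A minimal sketch of the Equinox point above: overriding a default initialization currently means "model surgery" with eqx.tree_at. The layer shape and the truncated-normal init below are illustrative assumptions, not Levanter's actual code.

    ```python
    import jax
    import equinox as eqx

    key = jax.random.PRNGKey(0)
    model = eqx.nn.Linear(768, 768, key=key)  # Equinox's default init

    # Manual surgery: build the weights you want, then swap them in.
    w_key, _ = jax.random.split(key)
    new_w = 0.02 * jax.random.truncated_normal(w_key, -2.0, 2.0, model.weight.shape)
    model = eqx.tree_at(lambda m: m.weight, model, new_w)
    ```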
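
    And a minimal sketch of the EasyLM-style sharding described above, assuming 8 local devices and an illustrative axis name "fsdp": a single mesh axis shards a data dimension and a weight dimension alike, with none of the parameter gathering that true FSDP implies.

    ```python
    import numpy as np
    import jax
    import jax.numpy as jnp
    from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

    # A single mesh axis (assumes exactly 8 devices, e.g. one TPU host).
    mesh = Mesh(np.array(jax.devices()).reshape(8), axis_names=("fsdp",))

    x = jnp.ones((32, 768))    # activations
    w = jnp.ones((768, 3072))  # a weight matrix

    # The same "fsdp" axis shards the batch dimension of x and one
    # dimension of w; nothing is gathered or re-sharded for compute.
    x = jax.device_put(x, NamedSharding(mesh, P("fsdp", None)))
    w = jax.device_put(w, NamedSharding(mesh, P(None, "fsdp")))
    ```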

  • Mixtral of Experts
    4 projects | news.ycombinator.com | 11 Dec 2023
    > Are you using a normal training script i.e. "continued pretraining" on ALL parameters with just document fragments rather than input output pairs?

    Yes, this one.

    > do you make a custom dataset that has qa pairs about that particular knowledgebase?

    This one. Once you have a checkpoint with the knowledge baked in, it makes sense to finetune. You can use LoRA or another PEFT method; we decide depending on the case (some orgs have millions of tokens, and I'm not that confident in PEFT at that scale).

    LoRA with raw document text may not work; I haven't tried that. Google has good example training scripts here: https://github.com/google-research/t5x (under training, and then finetuning). I like this one. Facebook Research also has a few in their repos.
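
    For context, a minimal sketch of the LoRA idea mentioned above (not t5x's implementation; shapes and hyperparameters are illustrative): the frozen weight W is augmented with a trainable low-rank update B @ A.

    ```python
    import jax
    import jax.numpy as jnp

    def init_lora(key, d_in, d_out, r=8):
        # A gets a small random init; B starts at zero so the adapter
        # is a no-op until training moves it.
        A = 0.01 * jax.random.normal(key, (r, d_in))
        B = jnp.zeros((d_out, r))
        return A, B

    def lora_linear(x, W, A, B, alpha=16.0, r=8):
        # W stays frozen; only A and B receive gradients.
        return x @ W.T + (x @ A.T) @ B.T * (alpha / r)
    ```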

    If you are just looking to scrape by, I would suggest just doing what they tell you to do. You can offer suggestions, but it's better to let them make the call. There's a lot of fluff and chatter online; everyone is still figuring this stuff out.

    One note: pretraining is costly, so most OSS devs just do direct finetuning/LoRA. That works because their datasets come from the open internet. Orgs aren't finding much value in those tactics, and yet many communities are filled with them.

  • Mixtures of Experts
    2 projects | news.ycombinator.com | 9 Oct 2023
    Google has released the models and code for the Switch Transformer from Fedus et al. (2021) under the Apache 2.0 license. [0]

    There's also OpenMoE, an open-source effort to train a mixture-of-experts model. Currently they've released a model with 8 billion parameters. [1] (A toy sketch of the routing idea follows the links below.)

    [0] https://github.com/google-research/t5x/blob/main/docs/models...

    [1] https://github.com/XueFuzhao/OpenMoE
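
    A toy sketch of the top-1 ("switch") routing idea behind these models (illustrative, not the released code): each token is dispatched to the single expert its router prefers, gated by the router probability.

    ```python
    import jax
    import jax.numpy as jnp

    def switch_route(x, router_w, experts):
        # x: [tokens, d_model]; router_w: [d_model, n_experts]
        probs = jax.nn.softmax(x @ router_w, axis=-1)
        expert_idx = jnp.argmax(probs, axis=-1)               # top-1 per token
        gate = jnp.take_along_axis(probs, expert_idx[:, None], axis=-1)
        # Toy version: run every expert densely and select. Real
        # implementations dispatch tokens to experts to avoid this cost.
        all_out = jnp.stack([f(x) for f in experts])          # [n_exp, tokens, d]
        return gate * all_out[expert_idx, jnp.arange(x.shape[0])]
    ```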

  • [D] ClosedAI license, open-source license which restricts only OpenAI, Microsoft, Google, and Meta from commercial use
    5 projects | /r/MachineLearning | 7 May 2023
  • [P] T5 Implementation in PyTorch
    3 projects | /r/MachineLearning | 4 Jan 2023
    You can find the official T5x repository by Google AI here: https://github.com/google-research/t5x
  • Google AI Introduces Confident Adaptive Language Modeling (CALM) For 3x Faster Text Generation With Language Models (LMs)
    1 project | /r/machinelearningnews | 20 Dec 2022
    Quick Read: https://www.marktechpost.com/2022/12/20/google-ai-introduces-confident-adaptive-language-modeling-calm-for-3x-faster-text-generation-with-language-models-lms/
    Paper: https://arxiv.org/pdf/2207.07061.pdf
    Code: https://github.com/google-research/t5x/tree/main/t5x/contrib/calm
  • New free open source 20B parameter model (Not GPT Neo) achieves state-of-the-art results (SOTA) and outperforms GPT-3
    2 projects | /r/NovelAi | 12 May 2022
    From Section 9.1 in the paper, it looks like the weights in the Google buckets are associated with the T5X model(s?) here: https://github.com/google-research/t5x

Stats

Basic t5x repo stats
Mentions: 7
Stars: 2,503
Activity: 8.5
Last commit: 4 days ago

google-research/t5x is an open-source project licensed under the Apache License 2.0, which is an OSI-approved license.

The primary programming language of t5x is Python.

