text-to-text-transfer-transformer VS t5x

Compare text-to-text-transfer-transformer vs t5x and see what their differences are.

text-to-text-transfer-transformer

Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" (by google-research)
                 text-to-text-transfer-transformer   t5x
Mentions         29                                   7
Stars            5,909                                2,491
Growth           1.1%                                 1.8%
Activity         5.0                                  8.5
Latest commit    3 months ago                         6 days ago
Language         Python                               Python
License          Apache License 2.0                   Apache License 2.0
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

text-to-text-transfer-transformer

Posts with mentions or reviews of text-to-text-transfer-transformer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-02-21.

t5x

Posts with mentions or reviews of t5x. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-23.
  • Maxtext: A simple, performant and scalable Jax LLM
    10 projects | news.ycombinator.com | 23 Apr 2024
    [3]: https://github.com/google-research/t5x

    Asking because I have worked extensively on training a large model on a TPU cluster, and started with Levanter, then tried MaxText, and finally ended up on EasyLM. My thoughts are:

    - Levanter is well-intentioned but unproven and lacking in features. For instance, its sharding is odd in that it requires the embedding dimension to be a multiple of the number of devices, so I can't test a model with an embedding dimension of 768 on a 512-device pod. I lost confidence in Levanter after finding some glaring correctness bugs (and helping get them fixed). Also, while I'm a huge fan of Equinox's approach, it's sadly underdeveloped (for instance, there's no way to specify non-default weight initialization strategies without manually doing model surgery to set weights).

    - MaxText was just very difficult to work with. We felt like we were fighting against it every time we needed to change something, because we were digging through numerous needless layers of abstraction. My favorite was when, after one long day of debugging, I found a function whose only purpose was to pass its arguments, untouched, to another function; that function's only purpose was to pass its arguments, untouched, to a third function, which slightly changed them and passed them to a fourth function that did the actual work.

    - EasyLM is, as the name says, easy. But on a deeper dive, the sharding functionality seems underdeveloped. What they call "FSDP" is not necessarily true FSDP; it's just a single axis that the JAX mesh is sharded over, which happens to shard some data axes and some model-weight axes (sketched after this comment).

    I'm still searching for a "perfect" JAX LLM codebase - any pointers?
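
    The single-axis sharding pattern described above, where one JAX mesh axis does double duty for data and for weights, can be shown in a few lines of plain JAX. The snippet below is a minimal sketch with made-up shapes and a hypothetical axis name "fsdp"; it is not code from EasyLM, Levanter, or MaxText. It also illustrates why each sharded dimension has to divide evenly by the number of devices on that axis.

    import jax
    import jax.numpy as jnp
    import numpy as np
    from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

    # One mesh axis, hypothetically named "fsdp", spanning all local devices.
    mesh = Mesh(np.asarray(jax.devices()), axis_names=("fsdp",))

    # The same axis shards the batch dimension of the data ...
    data_sharding = NamedSharding(mesh, P("fsdp", None))
    # ... and the embedding dimension of a weight matrix. Each sharded dimension
    # must be divisible by the device count along the axis, hence the complaint
    # above that an embedding dimension of 768 cannot be used on a 512-device pod.
    weight_sharding = NamedSharding(mesh, P("fsdp", None))

    batch = jax.device_put(jnp.ones((32, 768)), data_sharding)        # [batch, emb]
    weights = jax.device_put(jnp.ones((768, 3072)), weight_sharding)  # [emb, ffn]

    @jax.jit
    def layer(x, w):
        # XLA inserts the collectives implied by the input shardings; the point
        # is only that a single named axis shards both data and model weights.
        return x @ w

    print(layer(batch, weights).shape)  # (32, 3072)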

  • Mixtral of Experts
    4 projects | news.ycombinator.com | 11 Dec 2023
    > Are you using a normal training script i.e. "continued pretraining" on ALL parameters with just document fragments rather than input output pairs?

    Yes, this one.

    > do you make a custom dataset that has qa pairs about that particular knowledgebase?

    This one. Once you have a checkpoint with the knowledge, it makes sense to finetune. You can use either LoRA (sketched after this comment) or another PEFT method; we pick depending on the case (some orgs have millions of tokens, and I'm not that confident in PEFT there).

    LoRA with raw document text may not work; I haven't tried that. Google has a good example of training scripts here: https://github.com/google-research/t5x (under training, and then finetuning). I like this one. Facebook Research also has a few in their repos.

    If you are just looking to scrape by, I would suggest just doing what they tell you to do. You can offer suggestions, but it's better to let them make the call. There's a lot of fluff and chatter online, so everyone is still figuring things out.

    One note about pretraining: it is costly, so most OSS devs just do direct finetuning/LoRA. That works because their data comes from the open internet. Orgs aren't finding much value in these approaches, and yet many communities are full of these tactics.
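
    A toy version of the LoRA update mentioned above, written in plain JAX, follows. It is not the t5x training or finetuning code and not the Hugging Face PEFT API; the shapes, names, and learning rate are made up. The only point is the core mechanic: the pretrained weight stays frozen and gradients flow only into the two low-rank factors.

    import jax
    import jax.numpy as jnp

    def init_lora(key, d_in, d_out, rank=8):
        # A starts small, B starts at zero, so the initial LoRA delta is zero.
        return {
            "a": jax.random.normal(key, (d_in, rank)) * 0.01,
            "b": jnp.zeros((rank, d_out)),
        }

    def lora_dense(x, w_frozen, lora, alpha=16.0, rank=8):
        # y = x @ (W + (alpha / rank) * A @ B), with the pretrained W held fixed.
        return x @ w_frozen + (alpha / rank) * ((x @ lora["a"]) @ lora["b"])

    def loss_fn(lora, w_frozen, x, y):
        pred = lora_dense(x, w_frozen, lora)
        return jnp.mean((pred - y) ** 2)

    key = jax.random.PRNGKey(0)
    w_frozen = jax.random.normal(key, (512, 512))  # stands in for a pretrained weight
    lora = init_lora(key, 512, 512)
    x, y = jnp.ones((4, 512)), jnp.zeros((4, 512))

    # Differentiate with respect to the LoRA params only; w_frozen is untouched.
    grads = jax.grad(loss_fn)(lora, w_frozen, x, y)
    lora = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, lora, grads)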

  • Mixtures of Experts
    2 projects | news.ycombinator.com | 9 Oct 2023
    Google have released the models and code for the Switch Transformer from Fedus et al. (2021) under the Apache 2.0 licence. [0]

    There's also OpenMoE - an open-source effort to train a mixture of experts model. Currently they've released a model with 8 billion parameters. [1]

    [0] https://github.com/google-research/t5x/blob/main/docs/models...

    [1] https://github.com/XueFuzhao/OpenMoE
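
    As background on what Switch-style routing means, the snippet below is a toy top-1 mixture-of-experts layer in plain JAX. It is illustrative only, not the released t5x/Flaxformer implementation, and it omits expert capacity limits and the load-balancing loss; all sizes and names are made up.

    import jax
    import jax.numpy as jnp

    num_experts, d_model, d_ff = 4, 16, 64
    k1, k2, k3, k4 = jax.random.split(jax.random.PRNGKey(0), 4)

    router_w = jax.random.normal(k1, (d_model, num_experts)) * 0.02
    w_in = jax.random.normal(k2, (num_experts, d_model, d_ff)) * 0.02
    w_out = jax.random.normal(k3, (num_experts, d_ff, d_model)) * 0.02

    def switch_layer(x):                       # x: [tokens, d_model]
        logits = x @ router_w                  # [tokens, num_experts]
        probs = jax.nn.softmax(logits, axis=-1)
        expert = jnp.argmax(probs, axis=-1)    # top-1: one expert per token
        gate = jnp.take_along_axis(probs, expert[:, None], axis=-1)

        def per_token(tok, e):
            # Route the token through the FFN of its chosen expert.
            h = jax.nn.relu(tok @ w_in[e])
            return h @ w_out[e]

        out = jax.vmap(per_token)(x, expert)
        return gate * out                      # scale by the router probability

    tokens = jax.random.normal(k4, (8, d_model))
    print(switch_layer(tokens).shape)          # (8, 16)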

  • [D] ClosedAI license, open-source license which restricts only OpenAI, Microsoft, Google, and Meta from commercial use
    5 projects | /r/MachineLearning | 7 May 2023
  • [P] T5 Implementation in PyTorch
    3 projects | /r/MachineLearning | 4 Jan 2023
    You can find the official T5x repository by Google AI here: https://github.com/google-research/t5x
  • Google AI Introduces Confident Adaptive Language Modeling (CALM) For 3x Faster Text Generation With Language Models (LMs)
    1 project | /r/machinelearningnews | 20 Dec 2022
    Quick Read: https://www.marktechpost.com/2022/12/20/google-ai-introduces-confident-adaptive-language-modeling-calm-for-3x-faster-text-generation-with-language-models-lms/ Paper: https://arxiv.org/pdf/2207.07061.pdf Code: https://github.com/google-research/t5x/tree/main/t5x/contrib/calm
  • New free open source 20B parameter model (Not GPT Neo) achieves state-of-the-art results (SOTA) and outperforms GPT-3
    2 projects | /r/NovelAi | 12 May 2022
    From Section 9.1 in the paper, it looks like the weights in the Google buckets are associated with the T5X model(s?) here: https://github.com/google-research/t5x

What are some alternatives?

When comparing text-to-text-transfer-transformer and t5x you can also consider the following projects:

fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

google-research - Google Research

tortoise-tts - A multi-voice TTS system trained with an emphasis on quality

t5-pytorch - Implementation of Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer in PyTorch.

DeepCreamPy - Decensoring Hentai with Deep Neural Networks

bad-licenses - A compendium of absurd open-source licenses.

dalle-mini - DALL·E Mini - Generate images from a text prompt

Flux.jl - Relax! Flux is the ML library that doesn't make you tensor

latent-diffusion - High-Resolution Image Synthesis with Latent Diffusion Models

darwin-xnu - Legacy mirror of Darwin Kernel. Replaced by https://github.com/apple-oss-distributions/xnu

majesty-diffusion - Majesty Diffusion by @Dango233(@Dango233max) and @apolinario (@multimodalart)

OpenMoE - A family of open-sourced Mixture-of-Experts (MoE) Large Language Models