trax VS text-generation-webui

Compare trax vs text-generation-webui and see what their differences are.

text-generation-webui

A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models. (by oobabooga)
                         trax                  text-generation-webui
Mentions                 7                     876
Stars                    7,957                 36,552
Stars growth (monthly)   0.4%                  -
Activity                 4.7                   9.9
Latest commit            3 months ago          2 days ago
Language                 Python                Python
License                  Apache License 2.0    GNU Affero General Public License v3.0
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

trax

Posts with mentions or reviews of trax. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-23.
  • Maxtext: A simple, performant and scalable Jax LLM
    10 projects | news.ycombinator.com | 23 Apr 2024
    Is t5x an encoder/decoder architecture?

    Some more general options.

    The Flax ecosystem

    https://github.com/google/flax?tab=readme-ov-file

    or dm-haiku

    https://github.com/google-deepmind/dm-haiku

    were some of the best-developed communities in the JAX AI field.

    Perhaps the “trax” repo? https://github.com/google/trax

    Some HF examples https://github.com/huggingface/transformers/tree/main/exampl...

    Sadly it seems much of the work is proprietary these days, but one example could be Grok-1, if you customize the details. https://github.com/xai-org/grok-1/blob/main/run.py
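    The combinator style those repos share is easiest to see in trax itself. A minimal sketch in the spirit of the trax README (the layer sizes here are illustrative assumptions, not taken from any specific example):

        # A tiny trax model: tl.Serial chains layers, so the model reads
        # top-to-bottom like a pipeline from token ids to log-probabilities.
        from trax import layers as tl

        model = tl.Serial(
            tl.Embedding(vocab_size=8192, d_feature=256),  # token ids -> vectors
            tl.Mean(axis=1),   # average over the sequence dimension
            tl.Dense(2),       # two-class head (e.g. sentiment)
            tl.LogSoftmax(),   # log-probabilities over the two classes
        )
        print(model)  # prints the nested layer structure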

  • Replit's new Code LLM was trained in 1 week
    12 projects | news.ycombinator.com | 3 May 2023
    and the implementation https://github.com/google/trax/blob/master/trax/models/resea... if you are interested.

    Hope you get to look into this!

  • RedPajama: Reproduction of Llama with Friendly License
    4 projects | news.ycombinator.com | 17 Apr 2023
    Thank you for developing the pipeline and amassing considerable compute for gathering and preprocessing this dataset!

    I'm not sure if this is the right place to ask about this, but could you consider training an LLM using a more advanced, sparse transformer architecture (specifically, "Terraformer" from this paper https://arxiv.org/abs/2111.12763 and this codebase https://github.com/google/trax/blob/master/trax/models/resea... by Google Brain and OpenAI)? I understand the pressure to focus on training a straightforward LLaMA replication, but of course you see that it's a legacy dense architecture, which limits its inference performance. This new architecture is not just an academic curiosity but is already validated at scale by Google, providing a 10x+ inference performance boost on the same hardware.

    Frankly, the community's compute budget - for training and for inference - isn't infinite, and neither is the public's interest in models that do not have an advantage (at least in convenience) over closed-source ones; so we should use both of those resources as efficiently as possible. It could be a big step forward if you trained at least LLaMA-Terraformer-7B and 13B foundation models on the whole dataset.
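
    To make the sparsity claim concrete, here is a toy NumPy sketch of the block-sparse feed-forward idea from the Terraformer paper. This is emphatically not the trax implementation, and the controller here is a bare argmax rather than the paper's trained mechanism; it only illustrates why per-token block selection divides the dense FFN work by roughly the number of blocks:

        import numpy as np

        d_model, d_ff, n_blocks = 512, 2048, 16
        block = d_ff // n_blocks  # only 128 of 2048 units computed per token

        rng = np.random.default_rng(0)
        W_in = rng.standard_normal((d_model, d_ff)) * 0.02
        W_out = rng.standard_normal((d_ff, d_model)) * 0.02
        controller = rng.standard_normal((d_model, n_blocks)) * 0.02

        def sparse_ffn(x):
            """Feed-forward over a single token activation x of shape (d_model,)."""
            b = int(np.argmax(x @ controller))       # controller picks one block
            cols = slice(b * block, (b + 1) * block)
            h = np.maximum(x @ W_in[:, cols], 0.0)   # ReLU on the chosen block only
            return h @ W_out[cols, :]                # dense cost drops ~n_blocks-fold

        print(sparse_ffn(rng.standard_normal(d_model)).shape)  # (512,)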

  • The founder of Gmail claims that ChatGPT can “kill” Google in two years.
    1 project | /r/Futurology | 31 Jan 2023
    But a couple years later they came out with open source implementations yeah: https://github.com/google/trax/tree/master/trax/models/reformer
  • [D] Paper Explained - Sparse is Enough in Scaling Transformers (aka Terraformer) | Video Walkthrough
    1 project | /r/MachineLearning | 1 Dec 2021
    Code: https://github.com/google/trax/blob/master/trax/examples/Terraformer_from_scratch.ipynb
  • Why would I want to develop yet another deep learning framework?
    4 projects | /r/learnmachinelearning | 16 Sep 2021
  • How to train large models on a normal laptop?
    1 project | /r/LanguageTechnology | 14 Feb 2021
    Training language models is expensive. Train the biggest model you can afford. I assume you've tried the colab from the reformer GitHub: https://github.com/google/trax/tree/master/trax/models/reformer
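
    For reference, the model that colab trains can be instantiated directly from trax; a minimal sketch (the hyperparameter values below are illustrative assumptions, not the colab's settings):

        # Reformer replaces full attention with LSH attention, which is what
        # makes long contexts affordable on modest hardware.
        import trax

        model = trax.models.ReformerLM(
            vocab_size=32000,
            d_model=512,
            d_ff=2048,
            n_layers=6,
            n_heads=8,
            max_len=16384,  # far longer than a vanilla transformer could afford
            mode='train',
        )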

text-generation-webui

Posts with mentions or reviews of text-generation-webui. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-01.
  • Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?
    11 projects | news.ycombinator.com | 1 Apr 2024
    Some of the tools offer a path to doing tool use (fetching URLs and doing things with them) or RAG (searching your documents). I think Oobabooga https://github.com/oobabooga/text-generation-webui offers the latter through plugins.

    Our tool, https://github.com/transformerlab/transformerlab-app also supports the latter (document search) using local llms.

  • Ask HN: How to get started with local language models?
    6 projects | news.ycombinator.com | 17 Mar 2024
    You can use webui https://github.com/oobabooga/text-generation-webui

    Once you get a version up and running, make a copy before updating; updates have broken my working version several times and caused headaches.

    a decent explanation of the parameters, short of reading arXiv papers: https://github.com/oobabooga/text-generation-webui/wiki/03-%...

    a news ai website:

  • text-generation-webui VS LibreChat - a user suggested alternative
    2 projects | 29 Feb 2024
  • Show HN: I made an app to use local AI as daily driver
    31 projects | news.ycombinator.com | 27 Feb 2024
  • Ask HN: People who switched from GPT to their own models. How was it?
    3 projects | news.ycombinator.com | 26 Feb 2024
    The other answers recommend paths that give you (1) less control and (2) projects with smaller ecosystems.

    If you want a truly general purpose front-end for LLMs, the only good solution right now is oobabooga: https://github.com/oobabooga/text-generation-webui

    All the other alternatives support only a small fraction of the features and LLM backends that oobabooga does.

  • AI Girlfriend Is a Data-Harvesting Horror Show
    1 project | news.ycombinator.com | 14 Feb 2024
    The example waifu in text-generation-webui is good enough for me.

    https://github.com/oobabooga/text-generation-webui/blob/main...

  • Nvidia's Chat with RTX is a promising AI chatbot that runs locally on your PC
    7 projects | news.ycombinator.com | 13 Feb 2024
    > Downloading text-generation-webui takes a minute, lets you use any model and get going.

    What you're missing here is you're already in this area deep enough to know what ooogoababagababa text-generation-webui is. Let's back out to the "average Windows desktop user" level. Assuming they even know how to find it:

    1) Go to https://github.com/oobabooga/text-generation-webui?tab=readm...

    2) See a bunch of instructions for opening a terminal window and running random batch/PowerShell scripts. PowerShell, etc. will likely prompt you with a scary warning. Then you start wondering who ooobabagagagaba is...

    3) Assuming you get this far (many users won't even get to step 1), you're greeted with a web interface[0] FILLED to the brim with technical jargon and extremely overwhelming options just to get a model loaded, which is another mind warp because you have to choose among a bunch of random models with no clear meaning and nonsensical/joke-sounding names from someone called "TheBloke". Ok...

    Let's say you somehow braved this gauntlet and got this far; now you get to chat with it. Ok, what about my local documents? text-generation-webui itself has nothing for that. Repeat this process over the 10 random open source projects from a bunch of names you've never heard of in an attempt to accomplish that.

    This is "I saw this thing from Nvidia explode all over media, twitter, youtube, etc. I downloaded it from Nvidia, double-clicked, pointed it at a folder with documents, and it works".

    That's the difference and it's very significant.

    [0] - https://raw.githubusercontent.com/oobabooga/screenshots/main...

  • Ask HN: What are your top 3 coolest software engineering tools?
    1 project | news.ycombinator.com | 6 Feb 2024
    Maybe a copout answer, but setting up a local LLM on my development machine has been invaluable. I use DeepSeek Coder 6.7B [0] and Oobabooga's UI [1]. It helps me solve simple problems and find bugs, while still leaving the larger architecture decisions to me.

    [0] https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instr...

    [1] https://github.com/oobabooga/text-generation-webui

  • Meta AI releases Code Llama 70B
    6 projects | news.ycombinator.com | 29 Jan 2024
    You can download it and run it with [this](https://github.com/oobabooga/text-generation-webui). There's an API mode that you could leverage from your VS Code extension.
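
    As a sketch of that API mode: text-generation-webui can expose an OpenAI-compatible endpoint (enabled via its API flag; the port and the example payload below are assumptions, so check your own setup), after which any OpenAI-style client code works against it:

        import requests

        # Assumes the server was started with the API enabled and is
        # listening locally on the default port.
        resp = requests.post(
            "http://127.0.0.1:5000/v1/chat/completions",
            json={
                "messages": [{"role": "user", "content": "Write a bubble sort in Python."}],
                "max_tokens": 256,
                "temperature": 0.7,
            },
            timeout=120,
        )
        print(resp.json()["choices"][0]["message"]["content"])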
  • Ollama Python and JavaScript Libraries
    17 projects | news.ycombinator.com | 24 Jan 2024
    Same question here. Ollama is fantastic as it makes it very easy to run models locally. But if you already have a lot of code that processes OpenAI API responses (with retry, streaming, async, caching, etc.), it would be nice to be able to simply switch the API client to Ollama, without having to maintain a whole other branch of code that handles Ollama API responses. One way to do an easy switch is using the litellm library as a go-between, but it's not ideal (and I also recently found issues with their chat formatting for mistral models).

    For an OpenAI-compatible API, my current favorite method is to spin up models using oobabooga TGW. Your OpenAI API code then works seamlessly by simply switching out the api_base to the ooba endpoint. Regarding chat formatting, even ooba's Mistral formatting has issues[1], so I am doing my own in Langroid using HuggingFace tokenizer.apply_chat_template [2]

    [1] https://github.com/oobabooga/text-generation-webui/issues/53...

    [2] https://github.com/langroid/langroid/blob/main/langroid/lang...

    Related question - I assume ollama auto-detects and applies the right chat formatting template for a model?
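
    The apply_chat_template approach mentioned above is straightforward to reproduce; a minimal sketch (the model name is just an example):

        from transformers import AutoTokenizer

        # Let the model's own tokenizer build the chat prompt, instead of
        # trusting the serving layer's formatting.
        tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
        messages = [
            {"role": "user", "content": "Summarize the Reformer paper in one line."},
        ]
        prompt = tok.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
        print(prompt)  # e.g. "<s>[INST] ... [/INST]" for Mistral-instruct models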

What are some alternatives?

When comparing trax and text-generation-webui you can also consider the following projects:

flax - Flax is a neural network library for JAX that is designed for flexibility.

KoboldAI - KoboldAI is generative AI software optimized for fictional use, but capable of much more!

dm-haiku - JAX-based neural network library

llama.cpp - LLM inference in C/C++

muzero-general - MuZero

gpt4all - gpt4all: run open-source LLMs anywhere

ML-Optimizers-JAX - Toy implementations of some popular ML optimizers using Python/JAX

TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)

extending-jax - Extending JAX with custom C++ and CUDA code

KoboldAI-Client

objax

ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.