aitextgen VS nanoGPT

Compare aitextgen vs nanoGPT and see what their differences are.

aitextgen

A robust Python tool for text-based AI training and generation using GPT-2. (by minimaxir)

nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs. (by karpathy)
              aitextgen           nanoGPT
Mentions      19                  69
Stars         1,826               31,914
Growth        -                   -
Activity      1.8                 5.4
Last commit   10 months ago       about 1 month ago
Language      Python              Python
License       MIT License         MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

aitextgen

Posts with mentions or reviews of aitextgen. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-30.
  • Where is the engineering part in "prompt engineer"?
    6 projects | /r/datascience | 30 Jun 2023
    It's literally a wrapper for the ChatGPT API (currently). I have another library for training models from scratch but haven't had time to work on it.
  • self-hosted AI?
    11 projects | /r/selfhosted | 28 Mar 2023
    I'm experimenting with https://github.com/minimaxir/aitextgen for some simple tasks. It is pretty much a wrapper around GPT-2 and GPT Neo models.
  • How would I go about implementing warmup steps from the Transformers library?
    1 project | /r/learnmachinelearning | 1 Mar 2023
    I'm sorry if this is the wrong place to ask, but I wasn't sure where else to turn. Several of us have already opened an issue with aitextgen, but it seems that the maintainer isn't particularly active these days. I'm a fairly proficient developer (self-taught), and I know my way around ML, but I was not formally educated in deep learning. A lot of PyTorch Lightning looks like black magic to me. I suspect that I'm missing an important detail that would be fairly simple for many of you to identify.
  • NanoGPT
    8 projects | news.ycombinator.com | 11 Jan 2023
    To train small gpt-like models, there's also aitextgen: https://github.com/minimaxir/aitextgen
  • Neuro-sama sings "Take On Me" with her Angelic Voice
    1 project | /r/LivestreamFail | 7 Jan 2023
    It's actually relatively easy to train your own GPT model and there are multiple tools out there that make it almost just plug and play: https://github.com/minimaxir/aitextgen
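
    A minimal sketch of that "plug and play" workflow, based on the quick-start in aitextgen's README (the file name, step count and generation settings below are placeholders; check the current docs before relying on them):

        from aitextgen import aitextgen

        # Load the 124M GPT-2 model (downloaded on first use)
        ai = aitextgen(tf_gpt2="124M", to_gpu=True)

        # Finetune on a plain-text file; "input.txt" and num_steps are placeholders
        ai.train("input.txt", line_by_line=False, num_steps=3000)

        # Sample a few completions from the finetuned model
        ai.generate(n=5, prompt="Once upon a time", max_length=100)
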
  • Is there a place with all the models indexed?
    1 project | /r/StableDiffusion | 29 Oct 2022
    I've been learning Python, and for the past few days I've been playing around with the aitextgen library.
  • I built an AI model to auto-generate Dominion cards. Here are the hilariously bad results.
    1 project | /r/dominion | 27 Sep 2022
    Then I ran that through the AI and got it to spit out cards that looked like the training data. I used aitextgen. So I let it run for like 4 hours and it thinks it has made 10,000 rows of cards. But some of these cards are duplicates of each other or of cards that already exist, or use a card name that already exists in the original game, or have like 20 '|' characters in one row, or have zero '|'. So I run a script to remove all of the cards like that, and I end up with like 2,000-4,500 cards that are "functional".
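
    The cleanup step described above is easy to reproduce; a rough sketch, where the file names, the expected number of '|' separators, and the set of original card names are all placeholders:

        EXPECTED_PIPES = 5                          # placeholder: separators in a well-formed card row
        existing_names = {"Village", "Smithy"}      # placeholder: card names from the base game

        seen, kept = set(), []
        with open("generated_cards.txt") as f:      # placeholder: raw generated output
            for line in f:
                row = line.strip()
                if row.count("|") != EXPECTED_PIPES:           # malformed: wrong number of fields
                    continue
                name = row.split("|", 1)[0]
                if row in seen or name in existing_names:      # duplicate, or clashes with a real card
                    continue
                seen.add(row)
                kept.append(row)

        with open("functional_cards.txt", "w") as f:
            f.write("\n".join(kept))
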
  • Thoughts on GPT3?
    1 project | /r/ArtificialInteligence | 13 Jul 2022
    If you search this subreddit, you should find lots of discussions about it, as well as alternatives like GPT-J (open source). If you'd like to experiment with GPT-2 for text generation, try https://github.com/minimaxir/aitextgen. It's fun to play with.
  • Show HN: Tensorpedia – Using GPT-2 to synthesize Wikipedia articles
    1 project | news.ycombinator.com | 13 Jan 2022
    Hey HN! I've been lurking for a while now and I've finally created something that I feel is worth sharing.

    I've called this project "Tensorpedia." At its core, Tensorpedia takes in a title and utilizes it as a prompt for GPT-2 to synthesize the introductory part of a Wikipedia article. The machine learning stuff is written using a wonderful library called aitextgen [0], using Wikipedia's "Vital Articles" as a data set [1]. The server is written in Node, and it uses Redis as an article cache. If you want to read my article about it (for some reason), you can check it out here [2].

    I created this project to get more experience with server technologies. While I wouldn't say it's a complicated application, I learned quite a lot from it.

    Additionally, as I was inspired by all of those this-x-doesn't-exist projects from a while back, this project is mostly for fun. As such, I don't know how much practical use it has, but I've generated some pretty hilarious articles from it.

    [0] https://github.com/minimaxir/aitextgen

    [1] https://en.wikipedia.org/wiki/Wikipedia:Vital_articles/Level...

    [2] https://jonahsussman.net/posts/2022-01-this-wiki-dne/

  • Downloaded GPT-2, Encode.py, and Train.py not found.
    2 projects | /r/GPT3 | 8 Jan 2022
    If by downloaded you mean cloned the gpt-2 GitHub repo, it doesn't come with those scripts. I personally played around with https://github.com/minimaxir/aitextgen which is a simple wrapper around the gpt-2 code; it comes with some very clear usage examples. (Shout out to minimaxir and everyone else involved in aitextgen for making GPT-2 easy to use!)

nanoGPT

Posts with mentions or reviews of nanoGPT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-01.
  • Show HN: Predictive Text Using Only 13KB of JavaScript. No LLM
    3 projects | news.ycombinator.com | 1 Mar 2024
    Nice work! I built something similar years ago and I did compile the probabilities based on a corpus of text (public domain books) in an attempt to produce writing in the style of various authors. The results were actually quite similar to the output of nanoGPT[0]. It was very unoptimized and everything was kept in memory. I also knew nothing about embeddings at the time and only a little about NLP techniques that would certainly have helped. Using a graph database would probably have been better than the data structure I came up with at the time. You should look into stuff like Datalog, Tries[1], and N-Triples[2] for more inspiration.

    Your idea of splitting the probabilities based on whether you're starting the sentence or finishing it is interesting, but you might be able to benefit from an approach that creates a "window" of text you can use for lookup; an LCS[3] algorithm could do that. There's probably a lot of optimization you could do based on the probabilities of different sequences; I think this was the fundamental thing I was exploring in my project.

    Seeing this has inspired me further to consider working on that project again at some point.

    [0] https://github.com/karpathy/nanoGPT

    [1] https://en.wikipedia.org/wiki/Trie

    [2] https://en.wikipedia.org/wiki/N-Triples

    [3] https://en.wikipedia.org/wiki/Longest_common_subsequence
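
    The corpus-probability approach described above is essentially an n-gram (Markov chain) model; a minimal sketch, with the corpus file and the n-gram order chosen arbitrarily:

        import random
        from collections import defaultdict, Counter

        N = 2  # condition on the previous two words

        # Count how often each word follows each N-word context in the corpus
        table = defaultdict(Counter)
        words = open("corpus.txt").read().split()    # placeholder corpus
        for i in range(len(words) - N):
            table[tuple(words[i:i + N])][words[i + N]] += 1

        def predict(context):
            # Sample the next word in proportion to how often it followed this context
            counts = table.get(tuple(context[-N:]))
            if not counts:
                return None
            choices, weights = zip(*counts.items())
            return random.choices(choices, weights=weights)[0]

        out = ["it", "was"]                           # arbitrary seed
        for _ in range(20):
            nxt = predict(out)
            if nxt is None:
                break
            out.append(nxt)
        print(" ".join(out))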

  • LLMs Learn to Be "Generative"
    1 project | news.ycombinator.com | 4 Feb 2024
    LLMs model the joint probability p(x1, x2, ..., xn) = p(x1) * p(x2 | x1) * ... * p(xn | x1, ..., x_{n-1}), where x1 denotes the 1st token, x2 denotes the 2nd token, and so on.

    I understand the conditional terms p(x_n|...) where we use cross-entropy to calculate their losses. However, I'm unsure about the probability of the very first token p(x1). How is it calculated? Is it in some configurations of the training process, or in the model architecture, or in the loss function?

    IMHO, if the model doesn't learn p(x1) properly, the entire formula for Bayes' rule cannot be completed, and we can't refer to LLMs as "truly generative". Am I missing something here?

    I asked the same question on the nanoGPT repo: https://github.com/karpathy/nanoGPT/issues/432, but I haven't found the answer I'm looking for yet. Could someone please enlighten me?
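
    One common way to see it: if every training document begins with a special start-of-text token (GPT-2's <|endoftext|> delimiter effectively plays this role), then the distribution the model predicts at that position is exactly p(x1), and it gets the same cross-entropy term as every other position. nanoGPT itself trains on contiguous chunks of the token stream, so, as far as I can tell from train.py, every predicted token has some left context and there is no standalone p(x1) term unless you add such a start token. A toy illustration, with random logits standing in for a real model:

        import torch
        import torch.nn.functional as F

        vocab_size = 50257            # GPT-2's BPE vocabulary size, for illustration
        B, T = 4, 8                   # batch of 4 sequences, 8 positions each

        # Pretend these are the model's logits for sequences that all begin with a
        # start-of-text token; a real model would compute them from the token ids.
        logits = torch.randn(B, T, vocab_size)

        # The distribution predicted at position 0 (the start token) is p(x1).
        p_x1 = F.softmax(logits[:, 0, :], dim=-1)     # shape (B, vocab_size)
        print(p_x1.sum(dim=-1))                       # each row sums to 1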

  • A simulation of me: fine-tuning an LLM on 240k text messages
    2 projects | news.ycombinator.com | 4 Jan 2024
    This repo, albeit "old" given how much progress there has been in LLMs, has great, simple tutorials right there, e.g. fine-tuning GPT-2 on Shakespeare: https://github.com/karpathy/nanoGPT
  • Ask HN: Is it feasible to train my own LLM?
    3 projects | news.ycombinator.com | 2 Jan 2024
    For training from scratch, maybe a small model like https://github.com/karpathy/nanoGPT or tinyllama. Perhaps with quantization.
  • Writing a C compiler in 500 lines of Python
    4 projects | news.ycombinator.com | 4 Sep 2023
    It does remind me of a project [1] Andrej Karpathy did, writing a neural network and training code in ~600 lines (although networks have easier logic to code than a compiler).

    [1] https://github.com/karpathy/nanoGPT

  • [D] Can GPT "understand"?
    1 project | /r/MachineLearning | 20 Aug 2023
    But I'm still not convinced that it can't in theory. Maybe the training set or transformer size I'm using is too small. I'm using the nanoGPT implementation (https://github.com/karpathy/nanoGPT) with 24 layers, 12 heads, and 32 embedding dimensions per head. I'm using a character-based vocab: every digit is a separate token, plus '+', '=' and EOL.
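
    For reference, the setup described in that post corresponds roughly to a nanoGPT config like the sketch below (field names follow the repo's model.py; the block size is an assumption, since the post doesn't give one):

        from model import GPTConfig, GPT   # nanoGPT's model.py, run from inside the repo

        # Character-level vocab: digits 0-9 plus '+', '=' and an end-of-line token
        chars = list("0123456789+=") + ["\n"]

        config = GPTConfig(
            block_size=64,            # assumed context length
            vocab_size=len(chars),    # 13 tokens
            n_layer=24,
            n_head=12,
            n_embd=12 * 32,           # 12 heads x 32 dims per head = 384
            dropout=0.0,
            bias=True,
        )
        model = GPT(config)
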
  • Transformer Attention is off by one
    4 projects | news.ycombinator.com | 24 Jul 2023
    https://github.com/karpathy/nanoGPT/blob/f08abb45bd2285627d1...

    At training time, probabilities for the next token are computed for each position, so if we feed in a sequence of n tokens, we basically get n training examples, one for each position, but at inference time, we only compute the next token since we’ve already output the preceding ones.
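
    A compact way to see the "n training examples from n tokens" point: the targets are just the inputs shifted left by one, and cross-entropy is taken over every position at once. A sketch, with random logits standing in for the model's output:

        import torch
        import torch.nn.functional as F

        vocab = 1000
        tokens = torch.randint(0, vocab, (1, 9))    # a sequence of 9 token ids
        x = tokens[:, :-1]                          # model input: tokens 1..8
        y = tokens[:, 1:]                           # targets: tokens 2..9, i.e. input shifted by one

        logits = torch.randn(1, x.size(1), vocab)   # stand-in for model(x): (batch, positions, vocab)

        # One loss term per position: each position predicts the *next* token,
        # so an 8-token input yields 8 training examples in a single forward pass.
        loss = F.cross_entropy(logits.view(-1, vocab), y.reshape(-1))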

  • Sarah Silverman Sues ChatGPT Creator for Copyright Infringement
    1 project | /r/books | 10 Jul 2023
    And there are a bunch of other efforts at making training more efficient. Here's a cool model by Karpathy (OpenAI/used to head up Tesla's efforts): https://github.com/karpathy/nanoGPT
  • Douglas Hofstadter changes his mind on Deep Learning and AI risk
    2 projects | news.ycombinator.com | 3 Jul 2023
    Just being a part of any auto-regressive system does not contradict his statement.

    Go look at the GPT training code, here is the exact line: https://github.com/karpathy/nanoGPT/blob/master/train.py#L12...

    The model is only trained to predict the next token. The training regime is purely next-token prediction. There is no loopiness whatsoever here, strange or ordinary.

    Just because you take that feedforward neural network and wrap it in a loop to feed it its own output does not change the architecture of the neural net itself. The neural network was trained in one direction and runs in one direction. Hofstadter is surprised that such an architecture yields something that looks like intelligence.

    He specifically used the correct term "feedforward" to contrast with recurrent neural networks, which GPT is not: https://en.wikipedia.org/wiki/Feedforward_neural_network
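
    To make the "feedforward net wrapped in a loop" point concrete, sampling looks roughly like this (a sketch, not nanoGPT's exact generate() code; model is assumed to map token ids to per-position next-token logits):

        import torch

        @torch.no_grad()
        def generate(model, idx, max_new_tokens, block_size):
            for _ in range(max_new_tokens):
                idx_cond = idx[:, -block_size:]                  # crop to the context window
                logits = model(idx_cond)                         # one plain forward pass, no recurrence
                probs = torch.softmax(logits[:, -1, :], dim=-1)  # distribution over the next token only
                next_id = torch.multinomial(probs, num_samples=1)
                idx = torch.cat([idx, next_id], dim=1)           # feed the model's own output back in
            return idx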

  • NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation.
    1 project | /r/LocalLLaMA | 30 Jun 2023
    Does anyone have or know of an example implementation in plain PyTorch, not Hugging Face Transformers? Something you could plug into https://github.com/karpathy/nanoGPT ?
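
    A bare-bones plain-PyTorch sketch of what the post asks about. The scaling rule base * alpha ** (dim / (dim - 2)) is the one from the original NTK-aware RoPE write-up; note that stock nanoGPT uses learned positional embeddings, so rotary embeddings would first have to be wired into its attention layer:

        import torch

        def ntk_scaled_inv_freq(head_dim, base=10000.0, alpha=8.0):
            # NTK-aware scaling: stretch the RoPE base instead of interpolating positions,
            # so high frequencies are barely touched while low frequencies are slowed down.
            base = base * alpha ** (head_dim / (head_dim - 2))
            return 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))

        def apply_rope(x, inv_freq):
            # x: (batch, seq_len, head_dim); rotate each consecutive pair of channels
            seq_len = x.size(-2)
            angles = torch.arange(seq_len).float()[:, None] * inv_freq[None, :]   # (seq, dim/2)
            cos, sin = angles.cos(), angles.sin()
            x1, x2 = x[..., 0::2], x[..., 1::2]
            out = torch.empty_like(x)
            out[..., 0::2] = x1 * cos - x2 * sin
            out[..., 1::2] = x1 * sin + x2 * cos
            return out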

What are some alternatives?

When comparing aitextgen and nanoGPT you can also consider the following projects:

lm-evaluation-harness - A framework for few-shot evaluation of language models.

minGPT - A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training

DiscordChatAI-GPT2 - A chat AI discord bot written in python3 using GPT-2, trained on data scraped from every message of my discord server (can be trained on yours too)

RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.

gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.

PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

ChatGPT - 🔮 ChatGPT Desktop Application (Mac, Windows and Linux)

trump_gpt2_bot - aitextgen (aka GPT-2) Twitter bot

nn-zero-to-hero - Neural Networks: Zero to Hero

gpt4all - gpt4all: run open-source LLMs anywhere

gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]