aitextgen VS lm-evaluation-harness

Compare aitextgen vs lm-evaluation-harness and see how they differ.

aitextgen

A robust Python tool for text-based AI training and generation using GPT-2. (by minimaxir)

lm-evaluation-harness

A framework for few-shot evaluation of language models. (by EleutherAI)
                        aitextgen        lm-evaluation-harness
Mentions                19               34
Stars                   1,826            5,070
Stars growth (MoM)      -                19.3%
Activity                1.8              9.9
Last commit             10 months ago    1 day ago
Language                Python           Python
License                 MIT License      MIT License
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

aitextgen

Posts with mentions or reviews of aitextgen. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-06-30.
  • Where is the engineering part in "prompt engineer"?
    6 projects | /r/datascience | 30 Jun 2023
    It's literally a wrapper for the ChatGPT API (currently). I have another library for training models from scratch but haven't had time to work on it.
  • self-hosted AI?
    11 projects | /r/selfhosted | 28 Mar 2023
    I'm experimenting with https://github.com/minimaxir/aitextgen for some simple tasks. It's pretty much a wrapper around GPT-2 and GPT-NeoX models.
  • How would I go about implementing warmup steps from the Transformers library?
    1 project | /r/learnmachinelearning | 1 Mar 2023
    I'm sorry if this is the wrong place to ask, but I wasn't sure where else to turn. Several of us have already opened an issue with aitextgen, but it seems that the maintainer isn't particularly active these days. I'm a fairly proficient developer (self-taught), and I know my way around ML, but I was not formally educated in deep learning. A lot of PyTorch Lightning looks like black magic to me. I suspect that I'm missing an important detail that would be fairly simple for many of you to identify.
  • NanoGPT
    8 projects | news.ycombinator.com | 11 Jan 2023
    To train small GPT-like models, there's also aitextgen: https://github.com/minimaxir/aitextgen (a minimal usage sketch follows this list).
  • Neuro-sama sings "Take On Me" with her Angelic Voice
    1 project | /r/LivestreamFail | 7 Jan 2023
    It's actually relatively easy to train your own GPT model, and there are multiple tools out there that make it almost plug-and-play: https://github.com/minimaxir/aitextgen
  • Is there a place with all the models indexed?
    1 project | /r/StableDiffusion | 29 Oct 2022
    I've been learning Python, and for the past few days I've been playing around with the aitextgen library.
  • I built an AI model to auto-generate Dominion cards. Here are the hilariously bad results.
    1 project | /r/dominion | 27 Sep 2022
    Then I ran that through the AI and got it to spit out cards that looked like the training data. I used aitextgen. I let it run for about 4 hours, and it produced roughly 10,000 rows of cards. But some of these cards are duplicates of each other or of cards that already exist, or use a card name that already exists in the original game, or have like 20 '|' characters in one row, or have zero '|'. So I ran a script to remove all the cards like that (a sketch of that kind of filter follows this list), and I ended up with around 2,000-4,500 cards that are "functional".
  • Thoughts on GPT3?
    1 project | /r/ArtificialInteligence | 13 Jul 2022
    If you search this subreddit, you should find lots of discussions about it, as well as alternatives like GPT-J (open source). If you'd like to experiment with GPT-2 for text generation, try https://github.com/minimaxir/aitextgen. It's fun to play with.
  • Show HN: Tensorpedia – Using GPT-2 to synthesize Wikipedia articles
    1 project | news.ycombinator.com | 13 Jan 2022
    Hey HN! I've been lurking for a while now and I've finally created something that I feel is worth sharing.

    I've called this project "Tensorpedia." At its core, Tensorpedia takes in a title and utilizes it as a prompt for GPT-2 to synthesize the introductory part of a Wikipedia article. The machine learning stuff is written using a wonderful library called aitextgen [0], using Wikipedia's "Vital Articles" as a data set [1]. The server is written in Node, and it uses Redis as an article cache. If you want to read my article about it (for some reason), you can check it out here [2].

    I created this project to get more experience with server technologies. While I wouldn't say it's a complicated application, I learned quite a lot from it.

    Additionally, I was inspired by all of those this-x-doesn't-exist projects from a while back, so this project is mostly for fun. As such, I don't know how much practical use it has, but I've generated some pretty hilarious articles from it.

    [0] https://github.com/minimaxir/aitextgen

    [1] https://en.wikipedia.org/wiki/Wikipedia:Vital_articles/Level...

    [2] https://jonahsussman.net/posts/2022-01-this-wiki-dne/

  • Downloaded GPT-2, Encode.py, and Train.py not found.
    2 projects | /r/GPT3 | 8 Jan 2022
    If by "downloaded" you mean cloned the gpt-2 GitHub repo, it doesn't come with those scripts. I personally played around with https://github.com/minimaxir/aitextgen, which is a simple wrapper around the GPT-2 code and comes with some very clear usage examples. (Shout out to minimaxir and everyone else involved in aitextgen for making GPT-2 easy to use!)
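
For readers who want to try the workflow the posts above describe, here is a minimal sketch based on aitextgen's documented API. The file name input.txt, the step count, and the prompt are illustrative placeholders, not values from any of the posts.

    # Minimal aitextgen sketch: fine-tune the default small GPT-2 on a
    # plain-text file, then sample from the result. File name, step count,
    # and prompt are placeholder assumptions.
    from aitextgen import aitextgen

    ai = aitextgen()                                 # loads the default 124M GPT-2
    ai.train("input.txt", num_steps=2000)            # fine-tune on your own text
    ai.generate(n=3, prompt="The", max_length=100)   # print a few samples

A GPU (for example, a free Colab instance) is strongly preferable for training; even the 124M model is slow to fine-tune on CPU.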
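
The Dominion-cards post above also mentions a cleanup script for the generated rows. Here is a sketch of that kind of filter, assuming pipe-delimited rows with a fixed field count; the field count, file name, and example card names are hypothetical, since the post doesn't give them.

    # Hypothetical cleanup pass for generated, pipe-delimited card rows:
    # drop exact duplicates, rows with the wrong number of fields, and
    # rows whose name collides with an existing card.
    EXPECTED_FIELDS = 5                      # assumed; the post doesn't say
    existing_names = {"Village", "Smithy"}   # placeholder real card names

    seen, kept = set(), []
    with open("generated_cards.txt", encoding="utf-8") as f:
        for line in f:
            row = line.strip()
            fields = row.split("|")
            if len(fields) != EXPECTED_FIELDS:
                continue                     # too many or too few '|' separators
            name = fields[0].strip()
            if row in seen or name in existing_names:
                continue                     # duplicate row or name collision
            seen.add(row)
            kept.append(row)
    print(f"kept {len(kept)} functional cards")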

lm-evaluation-harness

Posts with mentions or reviews of lm-evaluation-harness. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-09.
  • Mistral AI Launches New 8x22B Moe Model
    4 projects | news.ycombinator.com | 9 Apr 2024
    The easiest is to use vllm (https://github.com/vllm-project/vllm) to run it on a couple of A100s, and you can benchmark it using this library (https://github.com/EleutherAI/lm-evaluation-harness)
  • Show HN: Times faster LLM evaluation with Bayesian optimization
    6 projects | news.ycombinator.com | 13 Feb 2024
    Fair question.

    Evaluation refers to the phase after training that checks whether the training worked well.

    Usually the flow goes training -> evaluation -> deployment (what you called inference). This project is aimed at evaluation. Evaluation can be slow (it might even be slower than training if you're fine-tuning on a small domain-specific subset)!

    So there are [quite](https://github.com/microsoft/promptbench) [a](https://github.com/confident-ai/deepeval) [few](https://github.com/openai/evals) [frameworks](https://github.com/EleutherAI/lm-evaluation-harness) working on evaluation. However, all of them are quite slow, because LLMs are slow if you don't have infinite money. [This](https://github.com/open-compass/opencompass) one tries to speed things up by parallelizing across multiple machines, but none of them takes advantage of the fact that many evaluation queries may be similar; they all evaluate on every given query. That's where this project might come in handy.

  • Language Model Evaluation Harness
    1 project | news.ycombinator.com | 25 Nov 2023
  • Best courses / tutorials on open-source LLM finetuning
    1 project | /r/LLMDevs | 10 Jul 2023
    I haven't run this yet, but I'm aware of EleutherAI's evaluation harness, EleutherAI/lm-evaluation-harness (a framework for few-shot evaluation of autoregressive language models), and GPT-4-based evaluations like lm-sys/FastChat (an open platform for training, serving, and evaluating large language models; the release repo for Vicuna and FastChat-T5).
  • Orca-Mini-V2-13b
    1 project | /r/LocalLLaMA | 9 Jul 2023
    Updates: Just finished the final evaluation (additional metrics) on https://github.com/EleutherAI/lm-evaluation-harness and averaged the results for orca-mini-v2-13b. The average results for the Open LLM Leaderboard are not that great compared to the initial metrics. The average is now 0.54675, which puts this model below many other 13B models out there.
  • My largest ever quants, GPT 3 sized! BLOOMZ 176B and BLOOMChat 1.0 176B
    6 projects | /r/LocalLLaMA | 6 Jul 2023
    Hey u/The-Bloke, appreciate the quants! What is the degradation on some benchmarks? Have you seen https://github.com/EleutherAI/lm-evaluation-harness? 3-bit and 2-bit quants will really be pushing it. I don't see a ton of evaluation results on the quants, and it would be nice to see a before and after.
  • Dataset of MMLU results broken down by task
    2 projects | /r/datasets | 6 Jul 2023
    I am primarily looking for results of running the MMLU evaluation on modern large language models. I have been able to find some data here: https://github.com/EleutherAI/lm-evaluation-harness/tree/master/results and will be asking them if/when they can provide any additional data.
  • Orca-Mini-V2-7b
    1 project | /r/LocalLLaMA | 3 Jul 2023
    I evaluated orca_mini_v2_7b on a wide range of tasks using the Language Model Evaluation Harness from EleutherAI (a minimal example of such a run appears at the end of this list).
  • Why Falcon 40B managed to beat LLaMA 65B?
    1 project | /r/datascience | 19 Jun 2023
  • OpenLLaMA 13B Released
    7 projects | news.ycombinator.com | 18 Jun 2023
    There is the Language Model Evaluation Harness project, which evaluates LLMs on over 200 tasks. Hugging Face has a leaderboard tracking performance on a subset of these tasks.

    https://github.com/EleutherAI/lm-evaluation-harness

    https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderb...
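
Several of the posts above amount to "run the harness against a model and average the task scores." As a rough sketch of what such a run looks like through the harness's Python API: the backend and task names below follow recent releases (older versions used slightly different names, such as hf-causal), and gpt2 is just an illustrative model id.

    # Minimal lm-evaluation-harness sketch: score a Hugging Face model on
    # a couple of benchmark tasks. Model id and tasks are illustrative.
    from lm_eval import evaluator

    results = evaluator.simple_evaluate(
        model="hf",                        # Hugging Face causal-LM backend
        model_args="pretrained=gpt2",      # any HF model id can go here
        tasks=["hellaswag", "arc_easy"],   # benchmark tasks to run
        num_fewshot=0,                     # zero-shot evaluation
        batch_size=8,
    )
    print(results["results"])              # per-task metrics (accuracy, etc.)

The same evaluation can also be launched from the command line (an lm_eval entry point in recent releases; python main.py in older ones).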

What are some alternatives?

When comparing aitextgen and lm-evaluation-harness you can also consider the following projects:

DiscordChatAI-GPT2 - A chat AI discord bot written in python3 using GPT-2, trained on data scraped from every message of my discord server (can be trained on yours too)

BIG-bench - Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models

gpt-neo - An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.

transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

StableLM - StableLM: Stability AI Language Models

nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.

gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.

trump_gpt2_bot - aitextgen (aka GPT-2) Twitter bot

gpt4all - gpt4all: run open-source LLMs anywhere

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI