nanoGPT vs RWKV-LM

Compare nanoGPT vs RWKV-LM and see what their differences are.

nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs. (by karpathy)

RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding. (by BlinkDL)
                   nanoGPT               RWKV-LM
Mentions           69                    84
Stars              31,914                11,657
Growth             -                     -
Activity           5.4                   8.8
Latest commit      about 1 month ago     3 days ago
Language           Python                Python
License            MIT License           Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

nanoGPT

Posts with mentions or reviews of nanoGPT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-01.
  • Show HN: Predictive Text Using Only 13KB of JavaScript. No LLM
    3 projects | news.ycombinator.com | 1 Mar 2024
    Nice work! I built something similar years ago and I did compile the probabilities based on a corpus of text (public domain books) in an attempt to produce writing in the style of various authors. The results were actually quite similar to the output of nanoGPT[0]. It was very unoptimized and everything was kept in memory. I also knew nothing about embeddings at the time and only a little about NLP techniques that would certainly have helped. Using a graph database would have probably been better than the data structure I came up with at the time. You should look into stuff like Datalog, Tries[1], and N-Triples[2] for more inspiration.

    Your idea of splitting the probabilities based on whether you're starting the sentence or finishing it is interesting, but you might be able to benefit from an approach that creates a "window" of text you can use for lookup; an LCS[3] algorithm could do that. There's probably a lot of optimization you could do based on the probabilities of different sequences; I think this was the fundamental thing I was exploring in my project.

    Seeing this has inspired me further to consider working on that project again at some point.

    [0] https://github.com/karpathy/nanoGPT

    [1] https://en.wikipedia.org/wiki/Trie

    [2] https://en.wikipedia.org/wiki/N-Triples

    [3] https://en.wikipedia.org/wiki/Longest_common_subsequence

  • LLMs Learn to Be "Generative"
    1 project | news.ycombinator.com | 4 Feb 2024
    p(x1, x2, ..., xn) = p(x1) · p(x2 | x1) · p(x3 | x1, x2) · ... · p(xn | x1, ..., x(n-1)), where x1 denotes the 1st token, x2 denotes the 2nd token, and so on.

    I understand the conditional terms p(x_n | ...) where we use cross-entropy to calculate their losses. However, I'm unsure about the probability of the very first token, p(x1). How is it calculated? Is it handled in some configuration of the training process, in the model architecture, or in the loss function?

    IMHO, if the model doesn't learn p(x1) properly, the entire factorization cannot be completed, and we can't refer to LLMs as "truly generative". Am I missing something here?

    I asked the same question on the nanoGPT repo: https://github.com/karpathy/nanoGPT/issues/432, but I haven't found the answer I'm looking for yet. Could someone please enlighten me?
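
One common way this is set up (a sketch of the usual convention, not a claim about how nanoGPT in particular configures it) is to give the model a start- or end-of-text token s, so that the "unconditional" first term becomes a conditional like every other term and is trained with the same cross-entropy loss:

    p(x_1, \dots, x_N) \;=\; \prod_{n=1}^{N} p(x_n \mid s,\, x_1, \dots, x_{n-1}),
    \qquad \text{where the } n = 1 \text{ term } p(x_1 \mid s) \text{ plays the role of } p(x_1).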

  • A simulation of me: fine-tuning an LLM on 240k text messages
    2 projects | news.ycombinator.com | 4 Jan 2024
    This repo, albeit "old" given how much progress there's been in LLMs, has great, simple tutorials right there, e.g. fine-tuning GPT-2 on Shakespeare: https://github.com/karpathy/nanoGPT
  • Ask HN: Is it feasible to train my own LLM?
    3 projects | news.ycombinator.com | 2 Jan 2024
    For training from scratch, maybe a small model like https://github.com/karpathy/nanoGPT or tinyllama. Perhaps with quantization.
  • Writing a C compiler in 500 lines of Python
    4 projects | news.ycombinator.com | 4 Sep 2023
    It does remind me of a project [1] Andrej Karpathy did, writing a neural network and training code in ~600 lines (although networks have easier logic to code than a compiler).

    [1] https://github.com/karpathy/nanoGPT

  • [D] Can GPT "understand"?
    1 project | /r/MachineLearning | 20 Aug 2023
    But I'm still not convinced that it can't in theory. Maybe the training set or transformer size I'm using is too small. I'm using the nanoGPT implementation (https://github.com/karpathy/nanoGPT) with 24 layers, 12 heads, and 32 embedding dimensions per head. I'm using a character-based vocab: every digit is a separate token, plus +, =, and EOL.
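
As a rough sketch, that setup maps onto nanoGPT's GPTConfig roughly as follows (the import assumes you are running inside the nanoGPT repo; the context length and exact vocab size for the digit/+/=/EOL tokens are assumptions, not taken from the post):

    from model import GPT, GPTConfig  # nanoGPT's model.py

    config = GPTConfig(
        n_layer=24,        # "24 layers"
        n_head=12,         # "12 heads"
        n_embd=12 * 32,    # 12 heads x 32 embedding dims per head = 384
        block_size=64,     # assumed context length for the arithmetic task
        vocab_size=14,     # assumed: digits 0-9 plus '+', '=', EOL, and a pad token
        dropout=0.0,
        bias=True,
    )
    model = GPT(config)
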
  • Transformer Attention is off by one
    4 projects | news.ycombinator.com | 24 Jul 2023
    https://github.com/karpathy/nanoGPT/blob/f08abb45bd2285627d1...

    At training time, probabilities for the next token are computed for each position, so if we feed in a sequence of n tokens, we basically get n training examples, one for each position. At inference time, we only compute the next token, since we've already output the preceding ones.
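
Concretely, the training side of that is just next-token targets at every position. A minimal, self-contained sketch with a stand-in model (nanoGPT's own forward pass computes the loss inside the model call when targets are given, but the shape of the idea is the same):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab, B, T = 100, 4, 16                         # made-up sizes for illustration
    model = nn.Sequential(nn.Embedding(vocab, 32),   # stand-in for a causal LM
                          nn.Linear(32, vocab))      # (it isn't actually causal)

    tokens = torch.randint(0, vocab, (B, T + 1))
    x, y = tokens[:, :-1], tokens[:, 1:]             # inputs and shifted next-token targets

    logits = model(x)                                # (B, T, vocab)
    # Training: a loss term at every position, so one sequence yields T "examples".
    loss = F.cross_entropy(logits.reshape(-1, vocab), y.reshape(-1))

    # Inference: only the last position's logits are needed to pick the next token.
    next_token = torch.argmax(logits[:, -1, :], dim=-1)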

  • Sarah Silverman Sues ChatGPT Creator for Copyright Infringement
    1 project | /r/books | 10 Jul 2023
    And there are a bunch of other efforts at making training more efficient. Here's a cool model by Karpathy (OpenAI/used to head up Tesla's efforts): https://github.com/karpathy/nanoGPT
  • Douglas Hofstadter changes his mind on Deep Learning and AI risk
    2 projects | news.ycombinator.com | 3 Jul 2023
    Just being a part of any auto-regressive system does not contradict his statement.

    Go look at the GPT training code, here is the exact line: https://github.com/karpathy/nanoGPT/blob/master/train.py#L12...

    The model is only trained to predict the next token. The training regime is purely next-token prediction. There is no loopiness whatsoever here, strange or ordinary.

    Just because you take that feedforward neural network and wrap it in a loop to feed it its own output does not change the architecture of the neural net itself. The neural network was trained in one direction and runs in one direction. Hofstadter is surprised that such an architecture yields something that looks like intelligence.

    He specifically used the correct term "feedforward" to contrast with recurrent neural networks, which GPT is not: https://en.wikipedia.org/wiki/Feedforward_neural_network
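
A minimal sketch of what "wrap it in a loop to feed it its own output" looks like, assuming a model(tokens) -> logits interface (nanoGPT's generate() is similar in spirit, though its forward() returns a (logits, loss) pair):

    import torch

    @torch.no_grad()
    def generate(model, idx, max_new_tokens, block_size=1024):
        # The network itself is a pure feedforward map from a token sequence to
        # next-token logits; all of the apparent "loopiness" lives out here.
        for _ in range(max_new_tokens):
            idx_cond = idx[:, -block_size:]              # crop to the context window
            logits = model(idx_cond)                     # one forward pass, no recurrence
            probs = torch.softmax(logits[:, -1, :], dim=-1)
            next_id = torch.multinomial(probs, num_samples=1)
            idx = torch.cat([idx, next_id], dim=1)       # feed the output back in
        return idx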

  • NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation.
    1 project | /r/LocalLLaMA | 30 Jun 2023
    Does anyone have or know of an example implementation in plain pytorch, not huggingface transformers. Like something you could plug into https://github.com/karpathy/nanoGPT ?

RWKV-LM

Posts with mentions or reviews of RWKV-LM. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-09.
  • Do LLMs need a context window?
    1 project | news.ycombinator.com | 25 Dec 2023
    https://github.com/BlinkDL/RWKV-LM#rwkv-discord-httpsdiscord... lists a number of implementations of various versions of RWKV.

    https://github.com/BlinkDL/RWKV-LM#rwkv-parallelizable-rnn-w... :

    > RWKV: Parallelizable RNN with Transformer-level LLM Performance (pronounced as "RwaKuv", from 4 major params: R W K V)

    > RWKV is an RNN with Transformer-level LLM performance, which can also be directly trained like a GPT transformer (parallelizable). And it's 100% attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the "GPT" mode to quickly compute the hidden state for the "RNN" mode.

    > So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding (using the final hidden state).

    > "Our latest version is RWKV-6,*

  • People who've used RWKV, whats your wishlist for it?
    9 projects | /r/LocalLLaMA | 9 Dec 2023
  • Paving the way to efficient architectures: StripedHyena-7B
    1 project | news.ycombinator.com | 8 Dec 2023
  • Understanding Deep Learning
    1 project | news.ycombinator.com | 26 Nov 2023
    That is not true. There are RNNs with transformer/LLM-like performance. See https://github.com/BlinkDL/RWKV-LM.
  • Q-Transformer: Scalable Reinforcement Learning via Autoregressive Q-Functions
    3 projects | news.ycombinator.com | 19 Sep 2023
    This is what RWKV (https://github.com/BlinkDL/RWKV-LM) was made for, and what it will be good at.

    Wow. Pretty darn cool! <3 :'))))

  • Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
    14 projects | /r/ChatGPT | 30 Jun 2023
    Thanks for the support! Two weeks ago, I'd have said longer contexts on small on-device LLMs are at least a year away, but developments from last week seem to indicate that it's well within reach. Once the low-hanging product features are done, I think it's a worthy problem to spend a couple of weeks or perhaps even months on. Speaking of context lengths, recurrent models like RWKV technically have infinite context lengths, but in practice the context slowly fades away after a few thousand tokens.
  • "If you see a startup claiming to possess top-secret results leading to human level AI, they're lying or delusional. Don't believe them!" - Yann LeCun, on the conspiracy theories of "X company has reached AGI in secret"
    1 project | /r/singularity | 26 Jun 2023
    This is the reason there are only a few AI labs, and they show little of the theoretical and scientific understanding you believe is required. Go check their code, there's nothing there. Even the transformer with its heads and other architectural elements turns out to not do anything, and it is less efficient than RNNs (see https://github.com/BlinkDL/RWKV-LM).
  • The Secret Sauce behind 100K context window in LLMs: all tricks in one place
    3 projects | news.ycombinator.com | 17 Jun 2023
    I've been pondering the same thing, as simply extending the context window in a straightforward manner would lead to a significant increase in computational resources. I've had the opportunity to experiment with Anthropic's 100k model, and it's evident that they're employing some clever techniques to make it work, albeit with some imperfections. One interesting observation is that their prompt guide recommends placing instructions after the reference text when inputting lengthy text bodies. I noticed that the model often disregarded the instructions if placed beforehand. It's clear that the model doesn't allocate the same level of "attention" to all parts of the input across the entire context window.

    Moreover, the inability to cache a transformer's processed context across calls makes the use of large context windows quite costly, as all previous messages must be sent (and re-processed) with each call. In this context, the RWKV-LM project on GitHub (https://github.com/BlinkDL/RWKV-LM) might offer a solution. They claim to achieve performance comparable to transformers using an RNN, which could potentially handle a 100-page document and cache it, thereby eliminating the need to process the entire document with each subsequent query. However, I suspect RWKV might fall short in handling complex tasks that require maintaining multiple variables in memory, such as mathematical computations, but it should suffice for many scenarios.

    On a related note, I believe Anthropic's Claude is somewhat underappreciated. In some instances, it outperforms GPT-4, and I'd rank it somewhere between GPT-4 and Bard overall.
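
The caching idea described above reduces to "run the recurrence over the document once, keep the final state, and start every query from that state". A generic sketch with a stand-in recurrent cell (illustrative only; RWKV's real state and APIs look different):

    import torch

    emb = torch.nn.Embedding(100, 64)                 # toy vocab / width
    cell = torch.nn.GRUCell(64, 64)                   # stand-in for RWKV's recurrence
    head = torch.nn.Linear(64, 100)

    def step(token, state):
        state = cell(emb(token), state)
        return head(state), state                     # next-token logits, new state

    def ingest(tokens, state):
        for tok in tokens:                            # O(length), but paid only once
            _, state = step(tok.view(1), state)
        return state                                  # fixed-size summary of the text

    doc_state = ingest(torch.randint(0, 100, (500,)), torch.zeros(1, 64))

    # Each follow-up question starts from the cached state instead of re-reading
    # the whole document.
    state = doc_state
    for tok in torch.randint(0, 100, (20,)):          # tokens of the question
        logits, state = step(tok.view(1), state)      # sample the answer from logits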

  • Meta's plan to offer free commercial AI models puts pressure on Google, OpenAI
    1 project | news.ycombinator.com | 16 Jun 2023
    > The only reason open-source LLMs have a heartbeat is they’re standing on Meta’s weights.

    Not necessarily.

    RWKV, for example, is a different architecture that wasn't based on Facebook's weights whatsoever. I don't know where BlinkDL (the author) got the training data, but they seem to have done everything mostly independently otherwise.

    https://github.com/BlinkDL/RWKV-LM

    disclaimer: I've been doing a lot of work lately on an implementation of CPU inference for this model, so I'm obviously somewhat biased since this is the model I have the most experience in.

  • Eliezer Yudkowsky - open letter on AI
    1 project | /r/HPMOR | 15 Jun 2023
    I think the main concern is that, due to the resources put into LLM research for finding new ways to refine and improve them, that work can then be used by projects that do go the extra mile and create things that are more than just LLMs. For example, RWKV is similar to an LLM but actually updates its own internal state after every processed token, thus letting it remember things longer-term without the use of 'context tokens'.

What are some alternatives?

When comparing nanoGPT and RWKV-LM you can also consider the following projects:

minGPT - A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training

llama - Inference code for Llama models

PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM

alpaca-lora - Instruct-tune LLaMA on consumer hardware

ChatGPT - 🔮 ChatGPT Desktop Application (Mac, Windows and Linux)

flash-attention - Fast and memory-efficient exact attention

nn-zero-to-hero - Neural Networks: Zero to Hero

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

gpt_index - LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]

gpt4all - gpt4all: run open-source LLMs anywhere

aitextgen - A robust Python tool for text-based AI training and generation using GPT-2.

RWKV-CUDA - The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM )