nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs. (by karpathy)

nanoGPT Alternatives

Similar projects and alternatives to nanoGPT

NOTE: The number of mentions on this list indicates mentions in common posts plus user-suggested alternatives. Hence, a higher number means a better nanoGPT alternative or higher similarity.

nanoGPT discussion

  1. plavenderfields · 4 months ago

     Review ★★★☆☆ 5/10

nanoGPT reviews and mentions

Posts with mentions or reviews of nanoGPT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-07-11.
  • Probably Pay Attention to Tokenizers
    1 project | news.ycombinator.com | 24 Oct 2024
    You would have to train the new model from scratch, since the token embeddings would all be new under whatever character-encoding scheme you come up with. It would probably make sense to train the vanilla GPT from scratch with the same total embedding size as your control. I would start with https://github.com/karpathy/nanoGPT as a baseline, since you can train a toy (GPT-2-sized) LLM in a couple of days on an A100, which are pretty easy to come by. (A rough config sketch follows.)
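
    As a hedged sketch of what "GPT-2 sized" means in nanoGPT's terms (the GPTConfig field names below are assumed from the repo's model.py rather than quoted from it):

        from model import GPTConfig, GPT  # nanoGPT's model.py, assumed importable

        # GPT-2 "small": 12 layers, 12 heads, 768-dim embeddings, roughly 124M parameters.
        config = GPTConfig(
            block_size=1024,
            vocab_size=50304,   # GPT-2's 50257 BPE tokens, padded up for efficiency
            n_layer=12,
            n_head=12,
            n_embd=768,
            dropout=0.0,
            bias=True,
        )
        model = GPT(config)
        print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
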
  • Tiny Shakespeare, of the good old char-RNN fame
    1 project | news.ycombinator.com | 4 Sep 2024
  • FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-Precision
    5 projects | news.ycombinator.com | 11 Jul 2024
    There are a bunch of good answers, but I wanted to succinctly say "practically, quite a bit". Here's a good little rabbit-hole example:

    > https://github.com/karpathy/nanoGPT/blob/master/model.py#L45

    Karpathy's nanoGPT calls flash attention by checking whether torch.nn.functional.scaled_dot_product_attention exists.

    > https://pytorch.org/docs/stable/generated/torch.nn.functiona...

    Looking at the docs, in practice you usually want this to call out to FlashAttention-2, which optimizes the on-device kernels by splitting the softmax over the causal (triangular) attention matrix into blocks, and by reducing unnecessary movement of floating-point data between the GPU's high-bandwidth memory and its on-chip SRAM.

    > https://arxiv.org/pdf/2307.08691

    The FA2 paper frames its design almost entirely around the hardware it runs on. (A paraphrased sketch of the dispatch pattern in model.py follows below.)
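
    Paraphrasing the dispatch pattern in that model.py (a sketch, not the exact source): prefer the fused kernel when the installed PyTorch exposes scaled_dot_product_attention, otherwise fall back to a manual causal attention that materializes the full T x T matrix.

        import torch
        import torch.nn.functional as F

        # Use the fused kernel (which can dispatch to FlashAttention) when available.
        HAS_FUSED_SDPA = hasattr(F, "scaled_dot_product_attention")

        def causal_attention(q, k, v, dropout_p=0.0):
            # q, k, v: (batch, n_head, seq_len, head_dim)
            if HAS_FUSED_SDPA:
                return F.scaled_dot_product_attention(
                    q, k, v, attn_mask=None, dropout_p=dropout_p, is_causal=True)
            # Naive fallback: materializes the full (seq_len x seq_len) attention
            # matrix (dropout omitted here for brevity).
            T = q.size(-2)
            att = (q @ k.transpose(-2, -1)) / (k.size(-1) ** 0.5)
            mask = torch.tril(torch.ones(T, T, dtype=torch.bool, device=q.device))
            att = att.masked_fill(~mask, float("-inf"))
            att = torch.softmax(att, dim=-1)
            return att @ v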

  • NanoGPT: The simplest, fastest repository for training medium-sized GPTs
    2 projects | news.ycombinator.com | 10 Jun 2024
  • Show HN: Predictive Text Using Only 13KB of JavaScript. No LLM
    3 projects | news.ycombinator.com | 1 Mar 2024
    Nice work! I built something similar years ago: I compiled the probabilities from a corpus of text (public domain books) in an attempt to produce writing in the style of various authors. The results were actually quite similar to the output of nanoGPT[0]. It was very unoptimized and everything was kept in memory. I also knew nothing about embeddings at the time and only a little about the NLP techniques that would certainly have helped. Using a graph database would probably have been better than the data structure I came up with at the time. You should look into things like Datalog, Tries[1], and N-Triples[2] for more inspiration. (A minimal sketch of the probability-counting idea follows the reference links below.)

    Your idea of splitting the probabilities based on whether you're starting the sentence or finishing it is interesting, but you might benefit from an approach that creates a "window" of text to use for lookup; an LCS[3] algorithm could do that. There's probably a lot of optimization you could do based on the probabilities of different sequences; I think that was the fundamental thing I was exploring in my project.

    Seeing this has inspired me further to consider working on that project again at some point.

    [0] https://github.com/karpathy/nanoGPT

    [1] https://en.wikipedia.org/wiki/Trie

    [2] https://en.wikipedia.org/wiki/N-Triples

    [3] https://en.wikipedia.org/wiki/Longest_common_subsequence
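
    A minimal sketch of the corpus-probability idea described above (all names here are made up for illustration, not taken from either project):

        import random
        from collections import defaultdict

        def build_bigram_counts(text):
            # Count how often each word follows each other word in the corpus.
            counts = defaultdict(lambda: defaultdict(int))
            words = text.split()
            for prev, nxt in zip(words, words[1:]):
                counts[prev][nxt] += 1
            return counts

        def predict_next(counts, word):
            # Sample the next word in proportion to how often it followed `word`.
            followers = counts.get(word)
            if not followers:
                return None
            candidates, weights = zip(*followers.items())
            return random.choices(candidates, weights=weights, k=1)[0]

        counts = build_bigram_counts("to be or not to be that is the question")
        print(predict_next(counts, "to"))  # prints "be"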

  • LLMs Learn to Be "Generative"
    1 project | news.ycombinator.com | 4 Feb 2024
    p(x_1, x_2, ..., x_n) = p(x_1) * p(x_2 | x_1) * ... * p(x_n | x_1, ..., x_{n-1}),

    where x_1 denotes the 1st token, x_2 denotes the 2nd token, and so on.

    I understand the conditional terms p(x_n | ...), where we use cross-entropy to calculate their losses. However, I'm unsure about the probability of the very first token, p(x_1). How is it calculated? Is it handled in some configuration of the training process, in the model architecture, or in the loss function?

    IMHO, if the model doesn't learn p(x_1) properly, the factorization above cannot be completed, and we can't call LLMs "truly generative". Am I missing something here?

    I asked the same question on the nanoGPT repo: https://github.com/karpathy/nanoGPT/issues/432, but I haven't found the answer I'm looking for yet. Could someone please enlighten me? (One common resolution is sketched below.)
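
    One common resolution (a hedged sketch, not necessarily the answer given in that issue): prepend a start-of-sequence token, such as GPT-2's <|endoftext|> separator, so that even the first real token is predicted conditionally and is covered by the same cross-entropy loss:

        p(x_1, \dots, x_n) = \prod_{t=1}^{n} p_\theta\left(x_t \mid \langle\mathrm{bos}\rangle, x_{<t}\right),
        \qquad
        \mathcal{L} = -\sum_{t=1}^{n} \log p_\theta\left(x_t \mid \langle\mathrm{bos}\rangle, x_{<t}\right)

    Under that convention there is no separate p(x_1) term to estimate; the first factor is just another conditional, learned exactly like the rest.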

  • A simulation of me: fine-tuning an LLM on 240k text messages
    2 projects | news.ycombinator.com | 4 Jan 2024
    This repo, albeit "old" given how much progress there has been in LLMs, has great, simple tutorials right there, e.g. fine-tuning GPT-2 on Shakespeare: https://github.com/karpathy/nanoGPT (a rough sketch of that flow follows below).
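
    A hedged sketch of that fine-tuning flow (the repo's real entry point is train.py with a finetuning config; get_batch below is a hypothetical stand-in for its data loading, and the GPT API names are assumed from model.py):

        import torch
        from model import GPT  # nanoGPT's model.py, assumed importable

        def get_batch(batch_size=8, block_size=256, vocab_size=50257):
            # Hypothetical placeholder: real code would return Shakespeare token ids.
            x = torch.randint(vocab_size, (batch_size, block_size))
            y = torch.randint(vocab_size, (batch_size, block_size))
            return x, y

        model = GPT.from_pretrained("gpt2")                  # pretrained GPT-2 weights
        optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

        for step in range(100):                              # tiny illustrative loop
            x, y = get_batch()
            logits, loss = model(x, y)                       # forward returns (logits, loss)
            optimizer.zero_grad(set_to_none=True)
            loss.backward()
            optimizer.step()
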
  • Ask HN: Is it feasible to train my own LLM?
    3 projects | news.ycombinator.com | 2 Jan 2024
    For training from scratch, maybe a small model like https://github.com/karpathy/nanoGPT or tinyllama. Perhaps with quantization.
  • Writing a C compiler in 500 lines of Python
    4 projects | news.ycombinator.com | 4 Sep 2023
    It does remind me of a project [1] Andrej Karpathy did, writing a neural network and training code in ~600 lines (although networks have easier logic to code than a compiler).

    [1] https://github.com/karpathy/nanoGPT

  • [D] Can GPT "understand"?
    1 project | /r/MachineLearning | 20 Aug 2023
    But I'm still not convinced that it can't in theory. Maybe the training set or transformer size I'm using is too small. I'm using the nanoGPT implementation (https://github.com/karpathy/nanoGPT) with 24 layers, 12 heads, and 32 embedding dimensions per head. I'm using a character-based vocab: every digit is a separate token, plus +, =, and EOL. (A config sketch in those terms follows below.)
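
    In nanoGPT-style hyperparameters, that setup looks roughly like this (a sketch; the GPTConfig field names are assumed from the repo's model.py, and block_size is a guess):

        from model import GPTConfig, GPT  # nanoGPT's model.py, assumed importable

        vocab = list("0123456789") + ["+", "=", "\n"]   # 13 character-level tokens
        config = GPTConfig(
            block_size=64,          # assumption: short arithmetic expressions
            vocab_size=len(vocab),
            n_layer=24,
            n_head=12,
            n_embd=12 * 32,         # 12 heads x 32 embedding dims per head = 384
            dropout=0.0,
            bias=True,
        )
        model = GPT(config)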

Stats

Basic nanoGPT repo stats
Mentions: 73
Stars: 36,988
Activity: 4.1
Last commit: 3 months ago

