nanoGPT

The simplest, fastest repository for training/finetuning medium-sized GPTs. (by karpathy)

nanoGPT Alternatives

Similar projects and alternatives to nanoGPT

  1. llama.cpp

    LLM inference in C/C++

  2. termux-app

    Termux - a terminal emulator application for Android OS, extendable by a variety of packages.

  3. Pytorch

    392 nanoGPT VS Pytorch

    Tensors and Dynamic neural networks in Python with strong GPU acceleration

  4. Open-Assistant

    OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

  5. transformers

    212 nanoGPT VS transformers

    🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.

  6. whisper.cpp

    Port of OpenAI's Whisper model in C/C++

  7. cheat.sh

    142 nanoGPT VS cheat.sh

    the only cheat sheet you need

  8. SDV

    59 nanoGPT VS SDV

    Synthetic data generation for tabular data

  9. gpt_index

    48 nanoGPT VS gpt_index

    Discontinued LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]

  10. Made-With-ML

    51 nanoGPT VS Made-With-ML

    Learn how to design, develop, deploy and iterate on production-grade ML applications.

  11. manticoresearch

    Easy-to-use, open-source, fast database for search | A good alternative to Elasticsearch now | A drop-in replacement for the E in the ELK stack soon

  12. minGPT

    37 nanoGPT VS minGPT

    A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training

  13. simonwillisonblog

    The source code behind my blog

  14. hivemind

    40 nanoGPT VS hivemind

    Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.

  15. micrograd

    24 nanoGPT VS micrograd

    A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API

  16. normcap

    18 nanoGPT VS normcap

    OCR powered screen-capture tool to capture information instead of images

  17. nn-zero-to-hero

    10 nanoGPT VS nn-zero-to-hero

    Neural Networks: Zero to Hero

  18. From-0-to-Research-Scientist-resources-guide

    A detailed and tailored guide for undergraduate students or anybody who wants to dig deep into the field of AI with a solid foundation.

  19. ChatGPT

    52 nanoGPT VS ChatGPT

    🔮 ChatGPT Desktop Application (Mac, Windows and Linux)

  20. RWKV-LM

    85 nanoGPT VS RWKV-LM

    RWKV (pronounced RwaKuv) is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". It combines the best of RNNs and transformers: great performance, linear time, constant space (no KV cache), fast training, infinite ctx_len, and free sentence embedding.

NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a better nanoGPT alternative or higher similarity.

nanoGPT discussion

  1. plavenderfields
    · 11 months ago

    Review ★★★☆☆ 5/10

nanoGPT reviews and mentions

Posts with mentions or reviews of nanoGPT. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2025-03-19.
  • Wolfram: Learning about Innovation from Half a Century of Conway's Game of Life
    2 projects | news.ycombinator.com | 19 Mar 2025
  • The Awesome Power of an LLM in Your Terminal
    3 projects | dev.to | 18 Mar 2025
  • A minimal PyTorch implementation for training your own small LLM from scratch
    8 projects | news.ycombinator.com | 29 Jan 2025
    The one you linked to is based on Karpathy's tutorial: https://www.youtube.com/watch?v=kCc8FmEb1nY, except it trains on TinyStories instead of Shakespeare. smolGPT also looks inspired by nanoGPT, also from Karpathy: https://github.com/karpathy/nanoGPT/blob/master/train.py
  • Probably Pay Attention to Tokenizers
    1 project | news.ycombinator.com | 24 Oct 2024
    You would have to train the new model from scratch, since it would be all new token embeddings with whatever character-encoding scheme you come up with. It would probably make sense to train the vanilla GPT from scratch with the same total embedding size as your control. I would start with https://github.com/karpathy/nanoGPT as a baseline, since you can train a toy (GPT-2-sized) LLM in a couple of days on an A100, which is pretty easy to come by. (A character-level tokenization sketch appears after this list.)
  • Tiny Shakespeare, of the good old char-RNN fame
    1 project | news.ycombinator.com | 4 Sep 2024
  • FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-Precision
    5 projects | news.ycombinator.com | 11 Jul 2024
    There are a bunch of good answers, but I wanted to succinctly say "practically, quite a bit". Here's a good little rabbit-hole example:

    > https://github.com/karpathy/nanoGPT/blob/master/model.py#L45

    Karpathy's nanoGPT calls flash attention by checking whether torch.nn.functional.scaled_dot_product_attention exists (a sketch of this check appears after this list).

    > https://pytorch.org/docs/stable/generated/torch.nn.functiona...

    Looking at the docs, in reality, most of the time you want this to call out to FA2, which optimizes the kernels on the device to split ops on the softmax of the triangular matrix, as well as to reduce unnecessarily moving batches of floating-point numbers back and forth from the GPU to the CPU.

    > https://arxiv.org/pdf/2307.08691

    The FA2 paper frames itself almost entirely in terms of the hardware it runs on.

  • NanoGPT: The simplest, fastest repository for training medium-sized GPTs
    2 projects | news.ycombinator.com | 10 Jun 2024
  • Show HN: Predictive Text Using Only 13KB of JavaScript. No LLM
    3 projects | news.ycombinator.com | 1 Mar 2024
    Nice work! I built something similar years ago, and I did compile the probabilities based on a corpus of text (public domain books) in an attempt to produce writing in the style of various authors. The results were actually quite similar to the output of nanoGPT[0]. It was very unoptimized and everything was kept in memory. I also knew nothing about embeddings at the time and only a little about NLP techniques that would certainly have helped. Using a graph database would probably have been better than the data structure I came up with at the time. You should look into stuff like Datalog, Tries[1], and N-Triples[2] for more inspiration. (A bigram-count sketch of this idea appears after this list.)

    Your idea of splitting the probabilities based on whether you're starting the sentence or finishing it is interesting, but you might benefit from an approach that creates a "window" of text you can use for lookup; an LCS[3] algorithm could do that. There's probably a lot of optimization you could do based on the probabilities of different sequences; I think this was the fundamental thing I was exploring in my project.

    Seeing this has inspired me further to consider working on that project again at some point.

    [0] https://github.com/karpathy/nanoGPT

    [1] https://en.wikipedia.org/wiki/Trie

    [2] https://en.wikipedia.org/wiki/N-Triples

    [3] https://en.wikipedia.org/wiki/Longest_common_subsequence

  • LLMs Learn to Be "Generative"
    1 project | news.ycombinator.com | 4 Feb 2024
    The model factorizes the probability of a sequence as p(x1, x2, ..., xn) = p(x1) · p(x2|x1) · p(x3|x1, x2) · ... · p(xn|x1, ..., xn-1), where x1 denotes the 1st token, x2 denotes the 2nd token, and so on.

    I understand the conditional terms p(x_n|...), where we use cross-entropy to calculate their losses. However, I'm unsure about the probability of the very first token, p(x1). How is it calculated? Is it set somewhere in the training configuration, in the model architecture, or in the loss function?

    IMHO, if the model doesn't learn p(x1) properly, the factorization above cannot be completed, and we can't refer to LLMs as "truly generative". Am I missing something here?

    I asked the same question on the nanoGPT repo: https://github.com/karpathy/nanoGPT/issues/432, but I haven't found the answer I'm looking for yet. Could someone please enlighten me? (A sketch of the per-token loss appears after this list.)

  • A simulation of me: fine-tuning an LLM on 240k text messages
    2 projects | news.ycombinator.com | 4 Jan 2024
    This repo, albeit "old" in regard to how much progress there's been in LLMs, has great, simple tutorials right there, e.g. fine-tuning GPT-2 on Shakespeare: https://github.com/karpathy/nanoGPT
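
The tokenizer discussion above suggests training from scratch with a custom character-encoding scheme, using nanoGPT as a baseline. Below is a minimal sketch of character-level data preparation in that spirit; the file names (input.txt, train.bin, val.bin) and the 90/10 split are illustrative assumptions, not a prescription.

    # Character-level tokenization sketch (assumes input.txt holds the training corpus).
    import numpy as np

    with open("input.txt", "r", encoding="utf-8") as f:
        text = f.read()

    # Build the vocabulary from the unique characters in the corpus.
    chars = sorted(set(text))
    stoi = {ch: i for i, ch in enumerate(chars)}  # char -> integer id
    itos = {i: ch for i, ch in enumerate(chars)}  # integer id -> char

    def encode(s):
        return [stoi[c] for c in s]

    def decode(ids):
        return "".join(itos[i] for i in ids)

    # Split into train/val and store the ids as uint16, in the style of nanoGPT's prepare scripts.
    n = len(text)
    train_ids = np.array(encode(text[: int(n * 0.9)]), dtype=np.uint16)
    val_ids = np.array(encode(text[int(n * 0.9):]), dtype=np.uint16)
    train_ids.tofile("train.bin")
    val_ids.tofile("val.bin")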
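
The FlashAttention mention above points at nanoGPT's model.py checking whether torch.nn.functional.scaled_dot_product_attention exists before using it for causal attention. A minimal sketch of that pattern, with a naive masked-softmax fallback, might look like the following; the tensor shapes and zero dropout are simplifying assumptions.

    import math
    import torch
    import torch.nn.functional as F

    # PyTorch >= 2.0 exposes fused scaled-dot-product attention, which can dispatch to FlashAttention kernels.
    HAS_FUSED_SDPA = hasattr(F, "scaled_dot_product_attention")

    def causal_attention(q, k, v):
        # q, k, v: (batch, heads, seq_len, head_dim)
        if HAS_FUSED_SDPA:
            # Fused path: is_causal=True applies the lower-triangular mask inside the kernel.
            return F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=True)
        # Fallback: materialize the full (seq_len, seq_len) attention matrix.
        T = q.size(-2)
        att = (q @ k.transpose(-2, -1)) / math.sqrt(k.size(-1))
        mask = torch.tril(torch.ones(T, T, dtype=torch.bool, device=q.device))
        att = att.masked_fill(~mask, float("-inf"))
        att = F.softmax(att, dim=-1)
        return att @ v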
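
The predictive-text comment above describes compiling next-word probabilities from a corpus of public-domain books. A tiny sketch of that idea using plain bigram counts (rather than the trie or graph-database structures it mentions) could look like this; the one-line corpus is an illustrative stand-in.

    from collections import Counter, defaultdict

    corpus = "to be or not to be that is the question"  # stand-in for a real corpus

    # Count how often each word follows each other word (bigram counts).
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

    def predict(prev):
        # Return the most frequent continuation seen in the corpus, if any.
        counts = follows.get(prev)
        return counts.most_common(1)[0][0] if counts else None

    print(predict("to"))    # "be"
    print(predict("that"))  # "is"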
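
The "truly generative" question above turns on the chain-rule factorization and the per-token cross-entropy terms. The sketch below shows the usual setup of one cross-entropy term per position over shifted targets; it only illustrates the mechanics and does not settle whether the first position models p(x1) unconditionally or conditioned on some preceding context, which depends on how the training windows are sampled.

    import torch
    import torch.nn.functional as F

    # Toy sizes: batch of 2 sequences, block size 5, vocabulary of 11 tokens.
    B, T, V = 2, 5, 11
    tokens = torch.randint(0, V, (B, T + 1))         # a window of T+1 token ids
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # targets are the inputs shifted by one

    logits = torch.randn(B, T, V, requires_grad=True)  # stand-in for model(inputs)

    # One cross-entropy term per position, averaged over batch and positions:
    # the t-th term is -log p(token at position t+1 | tokens up to position t) within this window.
    loss = F.cross_entropy(logits.reshape(-1, V), targets.reshape(-1))
    print(loss.item())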

Stats

Basic nanoGPT repo stats
Mentions: 78
Stars: 41,190
Activity: 3.6
Last commit: 5 months ago

