nanoGPT Alternatives
Similar projects and alternatives to nanoGPT
- termux-app: Termux, a terminal emulator application for Android OS, extensible by a variety of packages.
- Open-Assistant: OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and can retrieve information dynamically to do so.
- gpt_index: Discontinued. LlamaIndex (GPT Index) provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]
- manticoresearch: An easy-to-use, fast open-source database for search; a good alternative to Elasticsearch, and soon a drop-in replacement for the E in the ELK stack.
- minGPT: A minimal PyTorch re-implementation of OpenAI GPT (Generative Pretrained Transformer) training.
- hivemind: Decentralized deep learning in PyTorch, built to train models across thousands of volunteers around the world.
- micrograd: A tiny scalar-valued autograd engine and a neural net library on top of it with a PyTorch-like API.
- From-0-to-Research-Scientist-resources-guide: A detailed, tailored guide for undergraduate students or anyone who wants to dig deep into the field of AI with a solid foundation.
- RWKV-LM: RWKV (pronounced "RwaKuv") is an RNN with great LLM performance that can also be trained directly like a GPT transformer (parallelizable). Currently at RWKV-7 "Goose", it combines the best of RNNs and transformers: great performance, linear time, constant space (no KV cache), fast training, infinite ctx_len, and free sentence embedding.
nanoGPT discussion
nanoGPT reviews and mentions
- Wolfram: Learning about Innovation from Half a Century of Conway's Game of Life
- The Awesome Power of an LLM in Your Terminal
- A minimal PyTorch implementation for training your own small LLM from scratch
The one you linked to is based on Karpathy's tutorial (https://www.youtube.com/watch?v=kCc8FmEb1nY), except that it trains on TinyStories instead of Shakespeare. smolGPT also looks inspired by nanoGPT, also from Karpathy: https://github.com/karpathy/nanoGPT/blob/master/train.py
- Probably Pay Attention to Tokenizers
You would have to train the new model from scratch, since it would be all-new token embeddings with whatever character encoding scheme you come up with. It would probably make sense to train the vanilla GPT from scratch with the same total embedding size as your control. I would start with https://github.com/karpathy/nanoGPT as a baseline, since you can train a toy (GPT-2-sized) LLM in a couple of days on an A100, which is pretty easy to come by.
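For context, a minimal sketch of the kind of character-level encoding baseline this comment alludes to, in the spirit of nanoGPT's shakespeare_char data preparation (the file name input.txt and the details below are assumptions for illustration, not quotes from the repo):

```python
# Build a character-level vocabulary from a training text and map it to
# integer ids. This mirrors the general approach of nanoGPT's
# data/shakespeare_char/prepare.py, but it is a simplified illustration.
with open("input.txt", encoding="utf-8") as f:  # hypothetical corpus file
    text = f.read()

chars = sorted(set(text))                      # the "character encoding scheme"
stoi = {ch: i for i, ch in enumerate(chars)}   # char -> id
itos = {i: ch for ch, i in stoi.items()}       # id -> char

def encode(s: str) -> list[int]:
    return [stoi[c] for c in s]

def decode(ids: list[int]) -> str:
    return "".join(itos[i] for i in ids)

ids = encode(text)
print(f"vocab size: {len(chars)}, dataset length in tokens: {len(ids)}")
```

Holding the resulting vocabulary (and hence embedding table) size constant is roughly what the comment means by matching the "total embedding size" of the control model.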
- Tiny Shakespeare, of the good old char-RNN fame
- FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-Precision
There are a bunch of good answers, but I wanted to succinctly say "practically, quite a bit". Here's a good little rabbit-hole example:
> https://github.com/karpathy/nanoGPT/blob/master/model.py#L45
Karpathy's nanoGPT calls flash attention by checking whether torch.nn.functional.scaled_dot_product_attention exists.
> https://pytorch.org/docs/stable/generated/torch.nn.functiona...
Looking at the docs, in practice you usually want this to call out to FA2, which optimizes the kernels on the device to split ops on the softmax of the triangular matrix, as well as to reduce unnecessary movement of batches of floating-point numbers back and forth between the GPU and the CPU.
> https://arxiv.org/pdf/2307.08691
The FA2 paper frames itself almost entirely in terms of the hardware it runs on.
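To make that dispatch concrete, here is a condensed sketch of the pattern the comment describes: prefer the fused scaled_dot_product_attention kernel when the installed PyTorch provides it (>= 2.0), otherwise fall back to a manual causal attention. This is an illustration of the pattern, not nanoGPT's exact code.

```python
import math
import torch
import torch.nn.functional as F

# True on PyTorch >= 2.0, where the fused SDPA kernel is available.
HAS_FLASH = hasattr(F, "scaled_dot_product_attention")

def causal_attention(q, k, v, dropout_p=0.0):
    # q, k, v: (batch, n_heads, seq_len, head_dim)
    if HAS_FLASH:
        # Dispatches to FlashAttention / memory-efficient kernels when eligible.
        return F.scaled_dot_product_attention(
            q, k, v, attn_mask=None, dropout_p=dropout_p, is_causal=True
        )
    # Manual fallback: materializes the full (seq_len, seq_len) score matrix.
    T = q.size(-2)
    att = (q @ k.transpose(-2, -1)) / math.sqrt(k.size(-1))
    mask = torch.tril(torch.ones(T, T, dtype=torch.bool, device=q.device))
    att = att.masked_fill(~mask, float("-inf"))
    att = F.softmax(att, dim=-1)
    att = F.dropout(att, p=dropout_p)
    return att @ v

# Example: q = k = v = torch.randn(2, 4, 16, 32); y = causal_attention(q, k, v)
```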
- NanoGPT: The simplest, fastest repository for training medium-sized GPTs
- Show HN: Predictive Text Using Only 13KB of JavaScript. No LLM
Nice work! I built something similar years ago: I compiled the probabilities from a corpus of text (public domain books) in an attempt to produce writing in the style of various authors. The results were actually quite similar to the output of nanoGPT[0]. It was very unoptimized and everything was kept in memory. I also knew nothing about embeddings at the time and only a little about NLP techniques that would certainly have helped. Using a graph database would probably have been better than the data structure I came up with at the time. You should look into things like Datalog, tries[1], and N-Triples[2] for more inspiration.
Your idea of splitting the probabilities based on whether you're starting the sentence or finishing it is interesting, but you might benefit from an approach that creates a "window" of text you can use for lookup; an LCS[3] algorithm could do that. There's probably a lot of optimization you could do based on the probabilities of different sequences; I think this was the fundamental thing I was exploring in my project.
Seeing this has inspired me further to consider working on that project again at some point.
[0] https://github.com/karpathy/nanoGPT
[1] https://en.wikipedia.org/wiki/Trie
[2] https://en.wikipedia.org/wiki/N-Triples
[3] https://en.wikipedia.org/wiki/Longest_common_subsequence
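A tiny illustration of the "compile probabilities from a corpus" approach the comment describes: count word bigrams and suggest the most likely next words. This is purely a sketch; the commenter's actual project and data structures are not known here.

```python
from collections import Counter, defaultdict

def build_bigram_model(text: str) -> dict:
    """Count how often each word is followed by each other word."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def suggest(model: dict, word: str, k: int = 3) -> list[str]:
    # Return up to k candidate continuations, ranked by bigram frequency.
    return [w for w, _ in model[word.lower()].most_common(k)]

corpus = "to be or not to be that is the question"
model = build_bigram_model(corpus)
print(suggest(model, "to"))   # ['be']
```

A trie or a per-prefix "window" lookup, as suggested in the comment, would generalize this beyond single-word context.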
- LLMs Learn to Be "Generative"
The factorization in question is p(x1, x2, ..., xn) = p(x1) p(x2|x1) ... p(xn|x1, ..., xn-1), where x1 denotes the 1st token, x2 denotes the 2nd token, and so on.
I understand the conditional terms p(xn|...), whose losses are computed with cross-entropy. However, I'm unsure about the probability of the very first token, p(x1). How is it calculated? Is it set somewhere in the training configuration, in the model architecture, or in the loss function?
IMHO, if the model doesn't learn p(x1) properly, the factorization above cannot be completed, and we can't call LLMs "truly generative". Am I missing something here?
I asked the same question on the nanoGPT repo (https://github.com/karpathy/nanoGPT/issues/432), but I haven't found the answer I'm looking for yet. Could someone please enlighten me?
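For reference, the decomposition being discussed is the standard autoregressive factorization and its negative log-likelihood (written out here as a sketch in the comment's notation, not taken from the source):

```latex
% Chain-rule factorization of the sequence probability and the
% corresponding per-token cross-entropy terms.
\begin{align*}
  p(x_1, \dots, x_n) &= p(x_1) \prod_{t=2}^{n} p(x_t \mid x_1, \dots, x_{t-1}) \\
  -\log p(x_1, \dots, x_n) &= -\log p(x_1) - \sum_{t=2}^{n} \log p(x_t \mid x_{<t})
\end{align*}
```

In setups that prepend a start-of-sequence token, p(x_1) is itself just the conditional p(x_1 | <bos>) and is trained by the same cross-entropy term as every other position.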
- A simulation of me: fine-tuning an LLM on 240k text messages
This repo, albeit "old" relative to how much progress there has been in LLMs, has great, simple tutorials right there, e.g. fine-tuning GPT-2 on Shakespeare: https://github.com/karpathy/nanoGPT
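As a rough illustration of what that tutorial boils down to, here is a hedged sketch of a nanoGPT-style fine-tuning config in the spirit of the repo's config/finetune_shakespeare.py. nanoGPT config files are plain Python whose assignments override train.py defaults, but the specific values below are illustrative, not the repo's actual numbers.

```python
# Hypothetical fine-tuning config for nanoGPT's train.py (illustrative values).
out_dir = "out-shakespeare-ft"
init_from = "gpt2"           # start from pretrained GPT-2 weights
dataset = "shakespeare"      # token ids prepared under data/shakespeare/

# Small batches and a low learning rate, typical when fine-tuning a
# pretrained checkpoint rather than training from scratch.
batch_size = 1
gradient_accumulation_steps = 32
learning_rate = 3e-5
decay_lr = False
max_iters = 200
```

It would be run roughly as python train.py <path_to_config>, following the pattern in the repo's README.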
Stats
karpathy/nanoGPT is an open-source project licensed under the MIT License, an OSI-approved license.
The primary programming language of nanoGPT is Python.
Review: 5/10