nanoGPT Alternatives
Similar projects and alternatives to nanoGPT
-
termux-app
Termux - a terminal emulator application for Android OS, extensible by a variety of packages.
-
Open-Assistant
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
-
gpt_index
Discontinued LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. [Moved to: https://github.com/jerryjliu/llama_index]
-
minGPT
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
-
manticoresearch
Easy-to-use, open-source, fast database for search | A good alternative to Elasticsearch | Soon a drop-in replacement for the E in the ELK stack
-
hivemind
Decentralized deep learning in PyTorch. Built to train models across thousands of volunteer machines around the world.
-
micrograd
A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API
-
From-0-to-Research-Scientist-resources-guide
A detailed, tailored guide for undergraduate students or anybody who wants to dig deep into the field of AI with a solid foundation.
-
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it combines the best of RNNs and transformers - great performance, fast inference, VRAM savings, fast training, "infinite" ctx_len, and free sentence embeddings.
-
PaLM-rlhf-pytorch
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
nanoGPT discussion
nanoGPT reviews and mentions
-
Probably Pay Attention to Tokenizers
You would have to train the new model from scratch, since it would be all-new token embeddings with whatever character encoding scheme you come up with. It would probably make sense to train the vanilla GPT from scratch with the same total embedding size as your control. I would start with https://github.com/karpathy/nanoGPT as a baseline, since you can train a toy (GPT-2-sized) LLM in a couple of days on an A100, which is pretty easy to come by.
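As a rough illustration of the character-level scheme the comment describes (every character becomes its own token, so the embedding table and the model must be rebuilt from scratch), here is a minimal sketch; the function names are illustrative, not nanoGPT's API:

```python
# Illustrative character-level tokenizer: every distinct character becomes
# its own token id, so the vocabulary (and embedding table) is rebuilt and
# the model has to be trained from scratch.

def build_char_vocab(text: str):
    """Map each unique character to an integer id and back."""
    chars = sorted(set(text))
    stoi = {ch: i for i, ch in enumerate(chars)}
    itos = {i: ch for ch, i in stoi.items()}
    return stoi, itos

def encode(text: str, stoi: dict) -> list:
    return [stoi[ch] for ch in text]

def decode(ids: list, itos: dict) -> str:
    return "".join(itos[i] for i in ids)

if __name__ == "__main__":
    corpus = "First Citizen: Before we proceed any further, hear me speak."
    stoi, itos = build_char_vocab(corpus)
    ids = encode(corpus, stoi)
    assert decode(ids, itos) == corpus
    print(f"vocab size: {len(stoi)}, sequence length: {len(ids)}")
```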
- Tiny Shakespeare, of the good old char-RNN fame
-
FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-Precision
There are a bunch of good answers, but I wanted to succinctly say "practically, quite a bit". Here's a good little rabbit-hole example:
> https://github.com/karpathy/nanoGPT/blob/master/model.py#L45
Karpathy's nanoGPT calls flash attention after checking whether torch.nn.functional.scaled_dot_product_attention exists
> https://pytorch.org/docs/stable/generated/torch.nn.functiona...
Looking at the docs, in practice you usually want this to call out to FA2, which optimizes the on-device kernels to tile the softmax over the causal (triangular) attention matrix and to avoid shuttling unnecessary blocks of floating-point numbers back and forth between the GPU's main memory (HBM) and its on-chip SRAM.
> https://arxiv.org/pdf/2307.08691
The FA2 paper frames its design almost entirely around the hardware it runs on.
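The feature check mentioned above is easy to replicate. Here is a rough sketch of the pattern, assuming PyTorch 2.x: detect scaled_dot_product_attention and fall back to a manual causal attention otherwise. It is not a copy of nanoGPT's actual forward pass.

```python
import math
import torch
import torch.nn.functional as F

# Feature-detect the fused kernel the way the linked nanoGPT code does:
# PyTorch >= 2.0 exposes scaled_dot_product_attention, which can dispatch
# to FlashAttention kernels on supported GPUs.
HAS_FLASH = hasattr(F, "scaled_dot_product_attention")

def causal_attention(q, k, v):
    """q, k, v: (batch, heads, seq, head_dim)."""
    if HAS_FLASH:
        # Fused path: softmax tiling happens inside the kernel.
        return F.scaled_dot_product_attention(q, k, v, is_causal=True)
    # Fallback: materialize the full (seq, seq) attention matrix.
    T = q.size(-2)
    att = (q @ k.transpose(-2, -1)) / math.sqrt(q.size(-1))
    mask = torch.tril(torch.ones(T, T, dtype=torch.bool, device=q.device))
    att = att.masked_fill(~mask, float("-inf"))
    return F.softmax(att, dim=-1) @ v
```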
- NanoGPT: The simplest, fastest repository for training medium-sized GPTs
-
Show HN: Predictive Text Using Only 13KB of JavaScript. No LLM
Nice work! I built something similar years ago, and I did compile the probabilities from a corpus of text (public domain books) in an attempt to produce writing in the style of various authors. The results were actually quite similar to the output of nanoGPT[0]. It was very unoptimized and everything was kept in memory. I also knew nothing about embeddings at the time and only a little about NLP techniques that would certainly have helped. A graph database would probably have been better than the data structure I came up with at the time. You should look into stuff like Datalog, Tries[1], and N-Triples[2] for more inspiration.
Your idea of splitting the probabilities based on whether you're starting the sentence or finishing it is interesting, but you might also benefit from an approach that creates a "window" of text you can use for lookup; an LCS[3] algorithm could do that. There's probably a lot of optimization you could do based on the probabilities of different sequences; I think that was the fundamental thing I was exploring in my project.
Seeing this has inspired me further to consider working on that project again at some point.
[0] https://github.com/karpathy/nanoGPT
[1] https://en.wikipedia.org/wiki/Trie
[2] https://en.wikipedia.org/wiki/N-Triples
[3] https://en.wikipedia.org/wiki/Longest_common_subsequence
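To make the "compile the probabilities from a corpus" idea concrete, here is a toy bigram predictor; it is only a sketch of the general approach, not the commenter's original project:

```python
import random
from collections import defaultdict, Counter

def build_bigram_model(text: str) -> dict:
    """Count, for each word, which words tend to follow it."""
    words = text.split()
    model = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word][next_word] += 1
    return model

def predict_next(model: dict, word: str, k: int = 3) -> list:
    """Return the k most frequent continuations of `word`."""
    return [w for w, _ in model[word].most_common(k)]

def generate(model: dict, start: str, length: int = 10) -> str:
    """Greedy-ish generation: sample among the top continuations."""
    out = [start]
    for _ in range(length):
        candidates = model[out[-1]].most_common(3)
        if not candidates:
            break
        out.append(random.choice(candidates)[0])
    return " ".join(out)

if __name__ == "__main__":
    corpus = "to be or not to be that is the question to be is to do"
    model = build_bigram_model(corpus)
    print(predict_next(model, "to"))  # most likely words after "to"
    print(generate(model, "to"))
```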
-
LLMs Learn to Be "Generative"
p(x_1, x_2, ..., x_n) = p(x_1) p(x_2 | x_1) ... p(x_n | x_1, ..., x_{n-1}), where x_1 denotes the 1st token, x_2 denotes the 2nd token, and so on.
I understand the conditional terms p(x_n | ...), whose losses we calculate with cross-entropy. However, I'm unsure about the probability of the very first token, p(x_1). How is it calculated? Is it handled in some configuration of the training process, in the model architecture, or in the loss function?
IMHO, if the model doesn't learn p(x_1) properly, the chain-rule factorization above cannot be completed, and we can't refer to LLMs as "truly generative". Am I missing something here?
I asked the same question on the nanoGPT repo: https://github.com/karpathy/nanoGPT/issues/432, but I haven't found the answer I'm looking for yet. Could someone please enlighten me?
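One way to see where p(x_1) sits in practice: in a next-token setup like nanoGPT's, the model only ever learns conditionals p(x_t | x_<t); within a training chunk the first token is consumed as context rather than predicted, and models that want an explicit p(x_1) typically prepend a BOS token so it becomes p(x_1 | BOS). A rough, illustrative sketch of that loss computation (not nanoGPT's exact code):

```python
import torch
import torch.nn.functional as F

# Illustrative next-token loss in the style of a GPT training step.
# x is a chunk of token ids; inputs are x[:, :-1] and targets are x[:, 1:],
# so only conditional terms p(x_t | x_<t) ever appear in the loss. The very
# first token of the chunk is used as context and never predicted, which is
# the gap the question above points at.

def next_token_loss(logits: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """logits: (batch, seq-1, vocab) produced from inputs x[:, :-1]."""
    targets = x[:, 1:]  # shift: position t predicts token t+1
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )

if __name__ == "__main__":
    batch, seq, vocab = 2, 8, 16
    x = torch.randint(0, vocab, (batch, seq))
    fake_logits = torch.randn(batch, seq - 1, vocab)  # stand-in for model(x[:, :-1])
    print(next_token_loss(fake_logits, x))
```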
-
A simulation of me: fine-tuning an LLM on 240k text messages
This repo, albeit "old" relative to how much progress there has been in LLMs, has great, simple tutorials right there, e.g. fine-tuning GPT-2 on Shakespeare: https://github.com/karpathy/nanoGPT
-
Ask HN: Is it feasible to train my own LLM?
For training from scratch, maybe a small model like https://github.com/karpathy/nanoGPT or TinyLlama, perhaps with quantization.
-
Writing a C compiler in 500 lines of Python
It does remind me of a project [1] Andrej Karpathy did, writing a neural network and training code in ~600 lines (although networks have easier logic to code than a compiler).
[1] https://github.com/karpathy/nanoGPT
-
[D] Can GPT "understand"?
But I'm still not convinced that it can't in theory. Maybe the training set or the transformer size I'm using is too small. I'm using the nanoGPT implementation (https://github.com/karpathy/nanoGPT) with 24 layers, 12 heads, and 32 embedding dimensions per head. I'm using a character-based vocab: every digit is a separate token, plus '+', '=' and EOL.
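For reference, the model size described here (24 layers, 12 heads, 32 dimensions per head, so an embedding width of 384) might be written out roughly as follows; the field names are modeled on nanoGPT's GPTConfig but should be treated as an approximation, and the block size and exact vocab size are assumptions:

```python
from dataclasses import dataclass

# Rough sketch of the configuration described in the comment. Field names are
# modeled on nanoGPT's GPTConfig; block_size and vocab_size are assumptions.

@dataclass
class GPTConfig:
    block_size: int = 256      # context length (assumed; not stated in the comment)
    vocab_size: int = 16       # digits 0-9 plus '+', '=', EOL, with a little padding
    n_layer: int = 24
    n_head: int = 12
    n_embd: int = 12 * 32      # heads * embedding-per-head = 384
    dropout: float = 0.0
    bias: bool = True

config = GPTConfig()
print(config.n_embd)  # 384
```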
-
Stats
karpathy/nanoGPT is an open-source project licensed under the MIT License, which is an OSI-approved license.
The primary programming language of nanoGPT is Python.
Review: 5/10