picoGPT vs micrograd

| | picoGPT | micrograd |
|---|---|---|
| Mentions | 7 | 22 |
| Stars | 3,081 | 8,397 |
| Growth | - | - |
| Activity | 1.9 | 0.0 |
| Latest commit | about 1 year ago | 7 days ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
picoGPT
- Understanding Automatic Differentiation in 30 lines of Python
In that case, you might also enjoy https://jaykmody.com/blog/gpt-from-scratch/
(here's the raw code: https://github.com/jaymody/picoGPT/blob/main/gpt2.py)
- Transformers from Scratch
I wrote a minimal implementation in NumPy here (the forward pass code is only 40 lines): https://github.com/jaymody/picoGPT
Note that this is a decoder-only transformer (aka GPT), so it doesn't include the encoder part.
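To give a flavor of why the forward pass fits in so few lines, here is a minimal sketch of single-head causal self-attention in NumPy. To be clear, this is not picoGPT's actual code: the names and shapes are illustrative, and a real GPT-2 adds multiple heads, layer norm, MLP blocks, and learned embeddings.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def causal_self_attention(x, w_qkv, w_out):
    # x: [seq_len, d], w_qkv: [d, 3*d], w_out: [d, d]
    seq_len, d = x.shape
    q, k, v = np.split(x @ w_qkv, 3, axis=-1)                 # project to queries, keys, values
    mask = np.triu(np.full((seq_len, seq_len), -1e10), k=1)   # hide future tokens
    scores = q @ k.T / np.sqrt(d) + mask                      # scaled dot-product attention
    return softmax(scores) @ v @ w_out                        # weighted sum, then output projection

# toy usage with random weights: 4 tokens, 8-dim embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = causal_self_attention(x, rng.normal(size=(8, 24)), rng.normal(size=(8, 8)))
print(out.shape)  # (4, 8)
```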
- FLaNK Stack Weekly 3 April 2023
- GPT-4 Says an Open-Source Chatbot Vicuna Reaches 90% ChatGPT Quality
Take a look at https://github.com/jaymody/picoGPT/blob/a750c145ba4d09d57648...
Yes, this is GPT-2, not 4, and it's not the chat model, only the base model; it's basically only the inference part, not the training loop, and it's somewhat simplified.
Still, take a good look.
That's essentially what it is, and it fits on a single sheet of paper.
There is nothing specifically about language in "language model"; we just call it that. Better to call it just an LLM.
Nobody knows exactly what it learns, although there would be ways to poke around, given some research programs. But interest in that seems limited at the moment; everyone is busy improving the models or building applications.
Perhaps the answer is that we overestimated what a mind is. It's like when we used to ask what life is, and it turned out there is nothing special about life; not even the DNA is controlling anything. It's merely a chemical process, albeit a complex one.
- u/functor7 explains why AIs like ChatGPT do not "understand" their subject
(The hardest part was just designing a math function that has the capability of getting good at this game, but when all is said and done, it need not be a whole lot of code).
- PicoGPT: An unnecessarily tiny implementation of GPT-2 in NumPy
micrograd
- Micrograd-CUDA: adapting Karpathy's tiny autodiff engine for GPU acceleration
I recently decided to turbo-teach myself basic CUDA with a proper project. I really enjoyed Karpathy's micrograd (https://github.com/karpathy/micrograd), so I extended it with CUDA kernels and 2D tensor logic. It's a bit longer than the original project, but it's still very readable for anyone wanting to quickly learn about GPU acceleration in practice.
- Stuff we figured out about AI in 2023
For inference, less than 1 KLOC of pure, dependency-free C is enough (if you include the tokenizer and command-line parsing) [1]. This was a non-obvious fact for me: in principle, you could have run a modern LLM 20 years ago with just 1,000 lines of code, assuming you're fine with things potentially taking days to run, of course.
Training wouldn't be that much harder. Micrograd [2] is 200 LOC of pure Python, and 1,000 lines would probably be enough for training an (extremely slow) LLM. By "extremely slow", I mean that a training run that normally takes hours could probably take dozens of years, but the results would, in principle, be the same.
If you wrote it in C instead of Python and used something like llama.cpp's optimization tricks, you could probably get somewhat acceptable training performance in 2 or 3 KLOC. You'd still be off by one or two orders of magnitude compared to a GPU cluster, but a lot better than naive, loopy Python.
[1] https://github.com/karpathy/llama2.c
[2] https://github.com/karpathy/micrograd
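To make the "200 LOC of pure Python" claim concrete, here is a sketch of the core idea behind micrograd (not its exact code), trimmed down to just addition and multiplication; the real engine also covers subtraction, powers, ReLU, and so on:

```python
class Value:
    """A scalar that remembers how it was computed, so gradients can flow backwards."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # how to push out.grad into the inputs
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():  # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():  # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topologically sort the graph, then apply the chain rule in reverse
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# y = x*x + x, so dy/dx = 2x + 1 = 7 at x = 3
x = Value(3.0)
y = x * x + x
y.backward()
print(x.grad)  # 7.0
```

Training a model is then "just" a matter of building the forward pass out of Values, calling backward(), and nudging each parameter against its gradient in a loop.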
- Writing a C compiler in 500 lines of Python
Perhaps they were thinking of https://github.com/karpathy/micrograd
- Linear Algebra for Programmers
- Understanding Automatic Differentiation in 30 lines of Python
- Newbie question: Is there overloading of Haskell function signature?
I was (for fun) trying to recreate micrograd in Haskell. The idea is simple:
- [D] Backpropagation is not just the chain-rule, then what is it?
Check out this repo I found a few years back when I was looking into understanding PyTorch better. It's basically a super tiny autodiff library that only works on scalars. The whole repo is under 200 lines of code, so you can pull up PyCharm or whatever and step through the code and see how it all comes together. Or... you know, just read it; it's not super complicated.
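If you want a feel for it before stepping through, usage looks roughly like the example in micrograd's README (values here are arbitrary):

```python
from micrograd.engine import Value

a = Value(-4.0)
b = Value(2.0)
c = a + b           # c.data == -2.0
d = a * b + b**3    # -8.0 + 8.0 == 0.0
e = (c + 1).relu()  # relu(-1.0) == 0.0
d.backward()        # fills in .grad for everything feeding into d
print(a.grad)       # dd/da = b = 2.0
print(b.grad)       # dd/db = a + 3*b**2 = 8.0
```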
- Neural Networks: Zero to Hero
I'm doing an ML apprenticeship [1] these weeks, and Karpathy's videos are part of it. We've gone deep into them, and I found them excellent. Every concept he illustrates is crystal clear in his mind (even though the concepts themselves are complicated), and that shows in his explanations.
Also, the way he builds everything up is magnificent: starting from basic Python classes, to derivatives and gradient descent, to micrograd [2], and then from a bigram counting model [3] to makemore [4] and nanoGPT [5].
[1]: https://www.foundersandcoders.com/ml
[2]: https://github.com/karpathy/micrograd
[3]: https://github.com/karpathy/randomfun/blob/master/lectures/m...
[4]: https://github.com/karpathy/makemore
[5]: https://github.com/karpathy/nanoGPT
- Rustygrad - A tiny Autograd engine inspired by micrograd
Just published my first crate, rustygrad, a Rust implementation of Andrej Karpathy's micrograd!
- Hey Rustaceans! Got a question? Ask here (10/2023)!
I've been trying to reimplement Karpathy's micrograd library in Rust as a fun side project.
What are some alternatives?
gpt4all - gpt4all: run open-source LLMs anywhere
deepnet - Educational deep learning library in plain Numpy.
glances - Glances an Eye on your system. A top/htop alternative for GNU/Linux, BSD, Mac OS and Windows operating systems.
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
taskwarrior - Taskwarrior - Command line Task Management
deeplearning-notes - Notes for Deep Learning Specialization Courses led by Andrew Ng.
ctop - Top-like interface for container metrics
ML-From-Scratch - Machine Learning From Scratch. Bare bones NumPy implementations of machine learning models and algorithms with a focus on accessibility. Aims to cover everything from linear regression to deep learning.
Tensor-Puzzles - Solve puzzles. Improve your pytorch.
NNfSiX - Neural Networks from Scratch in various programming languages
exiftool - ExifTool meta information reader/writer
yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors