micrograd
deeplearning-notes
| | micrograd | deeplearning-notes |
|---|---|---|
| Mentions | 22 | 71 |
| Stars | 8,273 | 353 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Last commit | 5 days ago | over 1 year ago |
| Language | Jupyter Notebook | - |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
micrograd
-
Micrograd-CUDA: adapting Karpathy's tiny autodiff engine for GPU acceleration
I recently decided to turbo-teach myself basic CUDA with a proper project. I really enjoyed Karpathy's micrograd (https://github.com/karpathy/micrograd), so I extended it with CUDA kernels and 2D tensor logic. It's a bit longer than the original project, but it's still very readable for anyone wanting to quickly learn about GPU acceleration in practice.
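To make the idea concrete, here is a minimal sketch, not code from Micrograd-CUDA, of launching a hand-written CUDA kernel from Python. It assumes CuPy and a CUDA-capable GPU; the actual repo may use a different binding:

```python
import cupy as cp  # assumed dependency for this sketch

# A hand-written elementwise-add kernel, the "hello world" of CUDA
add_kernel = cp.RawKernel(r'''
extern "C" __global__
void add2d(const float* a, const float* b, float* out, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}
''', 'add2d')

a = cp.arange(12, dtype=cp.float32).reshape(3, 4)
b = cp.ones((3, 4), dtype=cp.float32)
out = cp.empty_like(a)
n = a.size
threads = 128
blocks = (n + threads - 1) // threads
add_kernel((blocks,), (threads,), (a, b, out, cp.int32(n)))  # grid, block, args
print(out)  # a + b, computed on the GPU
```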
-
Stuff we figured out about AI in 2023
For inference, less than 1 KLOC of pure, dependency-free C is enough (and that includes the tokenizer and command-line parsing) [1]. This was a non-obvious fact for me: in principle, you could have run a modern LLM 20 years ago with just 1,000 lines of code, assuming you're fine with things potentially taking days to run, of course.
Training wouldn't be that much harder: micrograd [2] is about 200 LOC of pure Python, so 1,000 lines would probably be enough to train an (extremely slow) LLM. By "extremely slow", I mean that a training run that normally takes hours could probably take dozens of years, but the results would, in principle, be the same.
If you were writing in C instead of Python and used something like llama.cpp's optimization tricks, you could probably get somewhat acceptable training performance in 2 or 3 KLOC. You'd still be off by one or two orders of magnitude compared to a GPU cluster, but a lot better than naive, loopy Python.
[1] https://github.com/karpathy/llama2.c
[2] https://github.com/karpathy/micrograd
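To give a feel for how little code that is, here's a stripped-down sketch of a micrograd-style scalar autodiff engine (illustrative only, not the repo's actual code):

```python
# Illustrative micrograd-style scalar autodiff; not the repo's actual code.
class Value:
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # set by the op that produced this node
        self._prev = set(children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():                # d(out)/d(self) = d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():                # product rule
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topologically sort the graph, then apply the chain rule in reverse
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

x = Value(3.0)
y = x * x + x          # dy/dx = 2x + 1 = 7 at x = 3
y.backward()
print(y.data, x.grad)  # 12.0 7.0
```

Everything else a trainable net needs (more ops, a Neuron/MLP wrapper, an SGD loop) is layered on top of exactly this mechanism.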
-
Writing a C compiler in 500 lines of Python
Perhaps they were thinking of https://github.com/karpathy/micrograd
- Linear Algebra for Programmers
- Understanding Automatic Differentiation in 30 lines of Python
-
Newbie question: Is there overloading of Haskell function signature?
I was (for fun) trying to recreate micrograd in Haskell. The idea is simple:
-
[D] Backpropagation is not just the chain-rule, then what is it?
Check out this repo I found a few years back when I was looking into understanding PyTorch better. It's basically a super tiny autodiff library that only works on scalars. The whole repo is under 200 lines of code, so you can pull it up in PyCharm or whatever and step through the code to see how it all comes together. Or... you know. Just read it, it's not super complicated.
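Stepping through it is quick, because micrograd's Value objects build the computation graph as you use them and backward() fills in the gradients:

```python
from micrograd.engine import Value  # pip install micrograd

a = Value(2.0)
b = Value(-3.0)
c = Value(10.0)
d = a * b + c   # forward pass builds the graph: d = -6 + 10 = 4
e = d.relu()
e.backward()    # reverse-mode autodiff from e down to the leaves

print(a.grad)   # de/da = b.data = -3.0
print(b.grad)   # de/db = a.data = 2.0
```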
-
Neural Networks: Zero to Hero
I'm doing an ML apprenticeship [1] these weeks and Karpathy's videos are part of it. We've gone deep into them, and I found them excellent. Every concept he illustrates is crystal clear in his mind (even though the concepts themselves are complicated), and that shows in his explanations.
Also, the way he builds everything up is magnificent: starting from basic Python classes, to derivatives and gradient descent, to micrograd [2], and then from a bigram counting model [3] to makemore [4] and nanoGPT [5].
[1]: https://www.foundersandcoders.com/ml
[2]: https://github.com/karpathy/micrograd
[3]: https://github.com/karpathy/randomfun/blob/master/lectures/m...
[4]: https://github.com/karpathy/makemore
[5]: https://github.com/karpathy/nanoGPT
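To make the starting point of that progression concrete, a character-level bigram model is literally just counting adjacent character pairs and sampling from them. Here is a toy sketch; the word list is a made-up stand-in, not the lecture's dataset:

```python
import random
from collections import defaultdict

names = ["emma", "olivia", "ava", "isabella", "sophia"]  # hypothetical toy data

# Count how often each character follows each other character
counts = defaultdict(lambda: defaultdict(int))
for w in names:
    chars = ["."] + list(w) + ["."]      # "." marks the start/end of a word
    for a, b in zip(chars, chars[1:]):
        counts[a][b] += 1

def sample(rng=random.Random(0)):
    # Walk the bigram table from the start token until we hit the end token
    out, ch = [], "."
    while True:
        chars, weights = zip(*counts[ch].items())
        ch = rng.choices(chars, weights=weights)[0]
        if ch == ".":
            return "".join(out)
        out.append(ch)

print([sample() for _ in range(3)])  # made-up "names" sampled from the counts
```

makemore then replaces the count table with a trained neural net, and nanoGPT replaces that with a transformer; the sampling loop stays conceptually the same.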
-
Rustygrad - A tiny Autograd engine inspired by micrograd
Just published my first crate, rustygrad, a Rust implementation of Andrej Karpathy's micrograd!
-
Hey Rustaceans! Got a question? Ask here (10/2023)!
I've been trying to reimplement Karpathy's micrograd library in Rust as a fun side project.
deeplearning-notes
-
Intuition for LSTM cell structure
If you want an in-depth understanding, I'd recommend the Deep Learning Specialization by Andrew Ng (Course 5, Sequence Models). He explains the LSTM and GRU cells in detail, mathematically. You can also find it on YouTube, I think. Hope it helps.
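For reference, the standard LSTM cell equations, in the concatenated-input notation the course uses, are:

```latex
\begin{aligned}
f_t &= \sigma(W_f\,[h_{t-1}, x_t] + b_f) && \text{(forget gate)}\\
i_t &= \sigma(W_i\,[h_{t-1}, x_t] + b_i) && \text{(input gate)}\\
\tilde{c}_t &= \tanh(W_c\,[h_{t-1}, x_t] + b_c) && \text{(candidate cell state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell state update)}\\
o_t &= \sigma(W_o\,[h_{t-1}, x_t] + b_o) && \text{(output gate)}\\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state)}
\end{aligned}
```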
-
[D] Best deep learning course?
Best place to get started https://www.coursera.org/specializations/deep-learning
- Which course from deeplearning.ai should I take first? There are so many now
-
Where to go from here
I want to expand on what I've learnt, in theory and practice, to be able to complete a project where I can download a video, run it through my model, and have it highlight specified items, e.g. people, trees, cars. Will this course help me get there? https://www.coursera.org/specializations/deep-learning
-
This is my self-learning curriculum for ML. Hope it helps and open to feedback!
Another one is from DeepLearning.AI, and it is also the most popular course for deep learning and neural networks - https://www.coursera.org/specializations/deep-learning
- AI roadmap
-
Assignments to practice for course "neural-networks-deep-learning"
This course is one of the 5 courses in the DL specialization: https://www.coursera.org/specializations/deep-learning. I am taking it on Coursera and have finished up to week 2. Now I need to practice, but I can't access the assignments since they're locked except for paying subscribers. Can someone share resources for practice, or any alternatives you found useful?
-
Coursera or Udacity for TF developer certificate
There is another [course](https://www.coursera.org/specializations/deep-learning) by deeplearning.ai that caught my eye; reviews say it goes into more detail than the TF in Practice course.
-
Career in Computer Vision - Best way to spool up through OMSCS
Deep learning, like others said, but I've seen some posts recommending taking an external class, like Andrew Ng's Coursera class (https://www.coursera.org/specializations/deep-learning), over the GT one. I haven't taken the GT one and don't plan to, but some people found it lacking.
-
How relevant is “A super harsh guide to machine learning” for someone who is just tinkering with machine learning?
My recommendations are worth little; I'm just starting through all this stuff myself. I'm currently taking the Deep Learning specialization on Coursera and trying to map out what else I should be doing.
What are some alternatives?
deepnet - Educational deep learning library in plain Numpy.
coursera-deep-learning-specialization - Notes, programming assignments and quizzes from all courses within the Coursera Deep Learning specialization offered by deeplearning.ai: (i) Neural Networks and Deep Learning; (ii) Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization; (iii) Structuring Machine Learning Projects; (iv) Convolutional Neural Networks; (v) Sequence Models
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
Credit_Card_Data_Clustering - Using Gaussian Clustering and PCA Techniques to make clusters of the Credit Card data
ML-From-Scratch - Machine Learning From Scratch. Bare bones NumPy implementations of machine learning models and algorithms with a focus on accessibility. Aims to cover everything from linear regression to deep learning.
Breast_Cancer_DecisionTree_Classifier
NNfSiX - Neural Networks from Scratch in various programming languages
ml-coursera-python-assignments - Python assignments for the machine learning class by Andrew Ng on Coursera, with complete submission for grading capability and rewritten instructions.
yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
machine.academy - Neural Network training library in C++ and C# with GPU acceleration
course-nlp - A Code-First Introduction to NLP course