micrograd
ml-coursera-python-assignments
| | micrograd | ml-coursera-python-assignments |
|---|---|---|
| Mentions | 22 | 43 |
| Stars | 8,273 | 5,382 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | 5 days ago | 11 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | - |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
micrograd
-
Micrograd-CUDA: adapting Karpathy's tiny autodiff engine for GPU acceleration
I recently decided to turbo-teach myself basic CUDA with a proper project. I really enjoyed Karpathy's micrograd (https://github.com/karpathy/micrograd), so I extended it with CUDA kernels and 2D tensor logic. It's a bit longer than the original project, but it's still very readable for anyone wanting to quickly learn about GPU acceleration in practice.
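For a flavor of what the 2D tensor extension involves, here is a minimal NumPy sketch (hypothetical, not the Micrograd-CUDA code): the key change from scalar micrograd is that each op's local gradient rule becomes a matrix identity, e.g. for C = A @ B.

```python
import numpy as np

class Tensor:
    """Micrograd-style autodiff node holding a 2D array instead of a scalar.
    Hypothetical sketch; a CUDA port would swap the np.* calls for kernels."""
    def __init__(self, data, _children=()):
        self.data = np.asarray(data, dtype=np.float32)
        self.grad = np.zeros_like(self.data)
        self._backward = lambda: None   # set by the op that produced this node
        self._prev = set(_children)

    def matmul(self, other):
        out = Tensor(self.data @ other.data, (self, other))
        def _backward():
            # chain rule for C = A @ B:
            self.grad += out.grad @ other.data.T   # dL/dA = dL/dC @ B^T
            other.grad += self.data.T @ out.grad   # dL/dB = A^T @ dL/dC
        out._backward = _backward
        return out

    # backward() would be the same topological-sort walk as in scalar
    # micrograd, just seeded with a matrix of ones instead of 1.0.
```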
-
Stuff we figured out about AI in 2023
For inference, less than 1 KLOC of pure, dependency-free C is enough (if you include the tokenizer and command-line parsing)[1]. This was a non-obvious fact for me: in principle, you could have run a modern LLM 20 years ago with just 1,000 lines of code, assuming you're fine with things potentially taking days to run, of course.
Training wouldn't be that much harder: micrograd[2] is 200 LOC of pure Python, so 1,000 lines would probably be enough for training an (extremely slow) LLM. By "extremely slow", I mean that a training run that normally takes hours could take dozens of years, but the results would, in principle, be the same.
If you were writing in C instead of Python and used something like llama.cpp's optimization tricks, you could probably get somewhat acceptable training performance in 2-3 KLOC. You'd still be off by one or two orders of magnitude compared to a GPU cluster, but far better than naive, loopy Python (a toy example of the loopy approach is sketched after the links).
[1] https://github.com/karpathy/llama2.c
[2] https://github.com/karpathy/micrograd
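To make the "loopy Python" point concrete, here is a minimal sketch (mine, not from either linked repo) of gradient-descent training in pure, dependency-free Python; it fits one linear neuron rather than an LLM, but the loop structure is the same:

```python
# Fit y = w*x + b by gradient descent, in pure Python (no libraries).
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # samples of y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05

for step in range(500):
    dw = db = 0.0
    for x, y in data:
        err = (w * x + b) - y            # forward pass
        dw += 2 * err * x / len(data)    # d(MSE)/dw
        db += 2 * err / len(data)        # d(MSE)/db
    w -= lr * dw                         # gradient-descent update
    b -= lr * db

print(w, b)  # converges toward 2.0 and 1.0
```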
-
Writing a C compiler in 500 lines of Python
Perhaps they were thinking of https://github.com/karpathy/micrograd
- Linear Algebra for Programmers
- Understanding Automatic Differentiation in 30 lines of Python
-
Newbie question: Is there overloading of Haskell function signatures?
I was (for fun) trying to recreate micrograd in Haskell. The idea is simple:
-
[D] Backpropagation is not just the chain-rule, then what is it?
Check out this repo I found a few years back when I was trying to understand PyTorch better. It's basically a super tiny autodiff library that only works on scalars. The whole repo is under 200 lines of code, so you can pull it up in PyCharm or whatever and step through the code to see how it all comes together. Or, you know, just read it; it's not super complicated.
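Condensed to a sketch, the pattern that repo implements looks like this (the real code has more operations, but the mechanism is the same): each operation records its inputs plus a local chain-rule step as a closure, and backward() replays the closures in reverse topological order.

```python
class Value:
    """Scalar autodiff node in the style of micrograd (condensed sketch)."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # local chain-rule step, set per op
        self._prev = set(_children)

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad       # d(a+b)/da = 1
            other.grad += out.grad      # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # topological order: each node appears after all of its inputs
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0                 # d(out)/d(out)
        for node in reversed(topo):
            node._backward()

a, b = Value(2.0), Value(-3.0)
loss = a * b + a                        # builds the graph as a side effect
loss.backward()
print(a.grad, b.grad)                   # -2.0 and 2.0
```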
-
Neural Networks: Zero to Hero
I'm doing an ML apprenticeship [1] these weeks and Karpathy's videos are part of it. We've gone deep into them, and I found them excellent. Every concept he illustrates is crystal clear in his mind (even though the concepts themselves are complicated), and that shows in his explanations.
Also, the way he builds everything up is magnificent: starting from basic Python classes, to derivatives and gradient descent, to micrograd [2], and then from a bigram counting model [3] to makemore [4] and nanoGPT [5].
[1]: https://www.foundersandcoders.com/ml
[2]: https://github.com/karpathy/micrograd
[3]: https://github.com/karpathy/randomfun/blob/master/lectures/m...
[4]: https://github.com/karpathy/makemore
[5]: https://github.com/karpathy/nanoGPT
-
Rustygrad - A tiny Autograd engine inspired by micrograd
Just published my first crate, rustygrad, a Rust implementation of Andrej Karpathy's micrograd!
-
Hey Rustaceans! Got a question? Ask here (10/2023)!
I've been trying to reimplement Karpathy's micrograd library in Rust as a fun side project.
ml-coursera-python-assignments
-
[D] Backpropagation is not just the chain-rule, then what is it?
Check this out in particular. It's the week 4 homework from Ng's course, redone by someone in Python instead of Octave. It's got a built-in grader, so you can grab the Jupyter notebook, run it locally, and it'll tell you when you've got the answer right. I'd recommend taking a crack at it; then, once you figure out how to code it, take a look at that micrograd library and see how you could achieve something similar with an object-oriented approach.
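For orientation, the matrix-style backprop the course assignments build up to has roughly this shape (a hedged sketch only: squared-error loss, no bias terms, my own names, not the assignment's actual code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(X, Y, W1, W2):
    """One forward/backward pass for a 1-hidden-layer sigmoid network.
    X: (n, d) inputs, Y: (n, k) targets, W1: (d, h), W2: (h, k)."""
    # forward
    A1 = sigmoid(X @ W1)               # hidden activations, (n, h)
    A2 = sigmoid(A1 @ W2)              # outputs, (n, k)
    # backward: the chain rule written as explicit matrix ops
    dZ2 = (A2 - Y) * A2 * (1 - A2)     # loss grad through output sigmoid
    dW2 = A1.T @ dZ2
    dZ1 = (dZ2 @ W2.T) * A1 * (1 - A1) # propagate through hidden sigmoid
    dW1 = X.T @ dZ1
    return dW1, dW2
```

Micrograd expresses exactly the same chain rule, just per scalar operation on a recorded graph rather than per hand-derived matrix formula.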
-
How do Andrew Ng's courses compare to OMSCS?
Python version of assignments which you can submit: https://github.com/dibgerge/ml-coursera-python-assignments
-
Is the new Andrew Ng specialisation course worth it if I finished the original one with Python exercises?
Basically the title. I'm halfway through the original Stanford University Machine Learning course by Andrew Ng, but instead of using the Octave/MATLAB exercises, I went with a Python repo. Now, I know the new specialisation course came out and is updated with newer content that's more relevant to the state of the industry today. I have the following choices:
-
Is the Andrew Ng course worth having to learn Octave?
A language is only worth learning if it's useful to know, but the only reason 99% of people would learn Octave is just to take that course lol. Besides, (a) the original course can be completed in Python using this repo, and (b) his new course is now actually offered in Python.
-
What do you think of Andrew Ng's new Machine Learning Specialization that launched last week on Coursera?
FWIW there is a repo you can use to complete the first one in Python. I used it and can vouch that it works perfectly as advertised.
-
Andrew Ng updates his Machine Learning course
You can do them in Python and submit them! https://github.com/dibgerge/ml-coursera-python-assignments
- Andrew Ng’s Machine Learning course is relaunching in Python in June 2022
-
[NEWS] Not sure if this has been posted before, but the ML course from Coursera is going to be updated with a new version in June (it will include Python)
Andrew Ng ML-Coursera Assignments in Python
-
New to ML
Last piece... Octave is super easy to get into. I don't personally think it's worth doing the Python versions of the homework, but if you really can't stand screwing around with a new language, this repo has alternate versions of the homework that use Python instead. You can do either these or the originals, so don't let the Octave scare you off. You don't have to use it if you really don't want to, but like I said, it's not a big deal either way; I just did it in Octave.
-
Has anyone here done Andrew Ng's ML Course in Python and could help me out with the first assignment?
Specifically, I'm referring to this GitHub repository: https://github.com/dibgerge/ml-coursera-python-assignments/blob/master/Exercise1/exercise1.ipynb. I'm currently doing Assignment 1.
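That exercise centers on linear regression fit by batch gradient descent; the update it asks you to implement has roughly this shape (a sketch with my own names and conventions, not the notebook's code):

```python
import numpy as np

def gradient_descent(X, y, theta, alpha, num_iters):
    """Batch gradient descent for linear regression.
    X is assumed to include a leading column of ones for the intercept."""
    m = y.size
    for _ in range(num_iters):
        error = X @ theta - y                        # hypothesis minus targets
        theta = theta - (alpha / m) * (X.T @ error)  # simultaneous update
    return theta
```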
What are some alternatives?
deepnet - Educational deep learning library in plain Numpy.
coursera-machine-learning-solutions-python - A repository with solutions to the assignments on Andrew Ng's machine learning MOOC on Coursera
tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]
Removeddit - View deleted stuff from reddit
deeplearning-notes - Notes for Deep Learning Specialization Courses led by Andrew Ng.
ML-From-Scratch - Machine Learning From Scratch. Bare bones NumPy implementations of machine learning models and algorithms with a focus on accessibility. Aims to cover everything from linear regression to deep learning.
py - Repository to store sample python programs for python learning
NNfSiX - Neural Networks from Scratch in various programming languages
RStudio Server - RStudio is an integrated development environment (IDE) for R
yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
ml-coursera-python-assignments-master - Python Machine Learning Exercises