micrograd VS llama2.c

Compare micrograd and llama2.c and see what their differences are.

micrograd

A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API (by karpathy)
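
In short, micrograd records a small computation graph as you do ordinary arithmetic on Value objects, then backpropagates through it with a single backward() call. A minimal usage sketch, modeled on the upstream README (worth double-checking the exact method names against the repo):

    # assumes micrograd is installed (e.g. from PyPI) or checked out locally
    from micrograd.engine import Value

    a = Value(-4.0)
    b = Value(2.0)
    c = a + b                  # each op records its inputs in the graph
    d = (a * b).relu() + c ** 2
    d.backward()               # reverse-mode autodiff over the recorded graph
    print(d.data)              # value of the expression
    print(a.grad, b.grad)      # dd/da and dd/db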

llama2.c

Inference Llama 2 in one file of pure C (by karpathy)
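
Stripped of optimizations, "inference in one file" is mostly an autoregressive loop: run the transformer forward pass on the tokens so far, turn the logits into a next-token choice, append it, and repeat. The rough shape of that loop, sketched in Python with the forward pass stubbed out (llama2.c implements it in plain C over the real weights; the names here are illustrative, not its API):

    import math
    import random

    VOCAB_SIZE = 32  # toy vocabulary

    def forward(tokens):
        """Stand-in for the transformer forward pass: logits for the next token."""
        random.seed(len(tokens))                 # deterministic toy logits
        return [random.gauss(0, 1) for _ in range(VOCAB_SIZE)]

    def sample(logits, temperature=1.0):
        """Softmax with temperature, then draw a token id."""
        scaled = [x / temperature for x in logits]
        m = max(scaled)
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        return random.choices(range(VOCAB_SIZE), weights=[e / total for e in exps], k=1)[0]

    tokens = [1, 5, 7]                           # an already-tokenized toy prompt
    for _ in range(20):                          # generate 20 new tokens
        logits = forward(tokens)                 # one forward pass per generated token,
        tokens.append(sample(logits))            # which is why speed is quoted in tokens/sec
    print(tokens)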
                 micrograd            llama2.c
Mentions         22                   13
Stars            8,273                15,942
Growth           -                    -
Activity         0.0                  9.2
Latest commit    5 days ago           3 days ago
Language         Jupyter Notebook     C
License          MIT License          MIT License
Mentions - the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

micrograd

Posts with mentions or reviews of micrograd. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-20.
  • Micrograd-CUDA: adapting Karpathy's tiny autodiff engine for GPU acceleration
    3 projects | news.ycombinator.com | 20 Mar 2024
    I recently decided to turbo-teach myself basic CUDA with a proper project. I really enjoyed Karpathy's micrograd (https://github.com/karpathy/micrograd), so I extended it with CUDA kernels and 2D tensor logic. It's a bit longer than the original project, but it's still very readable for anyone wanting to quickly learn about GPU acceleration in practice.
  • Stuff we figured out about AI in 2023
    5 projects | news.ycombinator.com | 1 Jan 2024
    For inference, less than 1 KLOC of pure, dependency-free C is enough (if you include the tokenizer and command-line parsing) [1]. This was a non-obvious fact for me: in principle, you could have run a modern LLM 20 years ago with just 1,000 lines of code, assuming you're fine with things potentially taking days to run, of course.

    Training wouldn't be that much harder; Micrograd [2] is 200 LOC of pure Python, and 1,000 lines would probably be enough to train an (extremely slow) LLM. By "extremely slow", I mean that a training run that normally takes hours could probably take dozens of years, but the results would, in principle, be the same.

    If you were writing in C instead of Python and used something like llama.cpp's optimization tricks, you could probably get somewhat acceptable training performance in 2 or 3 KLOC. You'd still be off by one or two orders of magnitude compared to a GPU cluster, but a lot better than naive, loopy Python.

    [1] https://github.com/karpathy/llama2.c

    [2] https://github.com/karpathy/micrograd

  • Writing a C compiler in 500 lines of Python
    4 projects | news.ycombinator.com | 4 Sep 2023
    Perhaps they were thinking of https://github.com/karpathy/micrograd
  • Linear Algebra for Programmers
    4 projects | news.ycombinator.com | 1 Sep 2023
  • Understanding Automatic Differentiation in 30 lines of Python
    9 projects | news.ycombinator.com | 24 Aug 2023
  • Newbie question: Is there overloading of Haskell function signature?
    1 project | /r/haskell | 26 May 2023
    I was (for fun) trying to recreate micrograd in Haskell. The idea is simple:
  • [D] Backpropagation is not just the chain-rule, then what is it?
    2 projects | /r/MachineLearning | 18 May 2023
    Check out this repo I found a few years back when I was looking into understanding PyTorch better. It's basically a super tiny autodiff library that only works on scalars. The whole repo is under 200 lines of code, so you can pull up PyCharm or whatever and step through the code and see how it all comes together. Or... you know. Just read it, it's not super complicated. (A condensed sketch of this backward pass appears just after this list.)
  • Neural Networks: Zero to Hero
    5 projects | news.ycombinator.com | 5 Apr 2023
    I'm doing an ML apprenticeship [1] these weeks and Karpathy's videos are part of it. We've gone deep into them, and I found them excellent. Every concept he illustrates is crystal clear in his mind (even though the concepts themselves are complicated), and that shows in his explanations.

    Also, the way he builds everything up is magnificent: starting from basic Python classes, to derivatives and gradient descent, to micrograd [2], and then from a bigram counting model [3] to makemore [4] and nanoGPT [5].

    [1]: https://www.foundersandcoders.com/ml

    [2]: https://github.com/karpathy/micrograd

    [3]: https://github.com/karpathy/randomfun/blob/master/lectures/m...

    [4]: https://github.com/karpathy/makemore

    [5]: https://github.com/karpathy/nanoGPT

  • Rustygrad - A tiny Autograd engine inspired by micrograd
    2 projects | /r/rust | 7 Mar 2023
    Just published my first crate, rustygrad, a Rust implementation of Andrej Karpathy's micrograd!
  • Hey Rustaceans! Got a question? Ask here (10/2023)!
    6 projects | /r/rust | 6 Mar 2023
    I've been trying to reimplement Karpathy's micrograd library in rust as a fun side project.
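
Picking up on the backpropagation thread above: the heart of a micrograd-style engine is just (1) each operation stashing a closure that knows how to push gradients to its inputs, and (2) backward() walking the graph in reverse topological order. A condensed sketch in that spirit (simplified; micrograd itself has more operators and niceties):

    class Value:
        def __init__(self, data, _children=()):
            self.data = data
            self.grad = 0.0
            self._backward = lambda: None        # how to push grad to the inputs
            self._prev = set(_children)

        def __add__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            out = Value(self.data + other.data, (self, other))
            def _backward():
                self.grad += out.grad            # d(a+b)/da = 1
                other.grad += out.grad           # d(a+b)/db = 1
            out._backward = _backward
            return out

        def __mul__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            out = Value(self.data * other.data, (self, other))
            def _backward():
                self.grad += other.data * out.grad   # d(a*b)/da = b
                other.grad += self.data * out.grad   # d(a*b)/db = a
            out._backward = _backward
            return out

        def backward(self):
            # reverse topological order: a node's grad is complete before it is propagated
            topo, visited = [], set()
            def build(v):
                if v not in visited:
                    visited.add(v)
                    for child in v._prev:
                        build(child)
                    topo.append(v)
            build(self)
            self.grad = 1.0
            for v in reversed(topo):
                v._backward()

    # gradients of d = a*b + a with respect to a and b
    a, b = Value(3.0), Value(-2.0)
    d = a * b + a
    d.backward()
    print(a.grad, b.grad)                        # -1.0 and 3.0

This is also why "backprop is not just the chain rule" in practice: the chain rule supplies the local derivatives, but the engine still has to accumulate gradients across shared nodes and order the traversal correctly.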

llama2.c

Posts with mentions or reviews of llama2.c. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-01.
  • Stuff we figured out about AI in 2023
    5 projects | news.ycombinator.com | 1 Jan 2024
    For inference, less than 1 KLOC of pure, dependency-free C is enough (if you include the tokenizer and command-line parsing) [1]. This was a non-obvious fact for me: in principle, you could have run a modern LLM 20 years ago with just 1,000 lines of code, assuming you're fine with things potentially taking days to run, of course.

    Training wouldn't be that much harder; Micrograd [2] is 200 LOC of pure Python, and 1,000 lines would probably be enough to train an (extremely slow) LLM. By "extremely slow", I mean that a training run that normally takes hours could probably take dozens of years, but the results would, in principle, be the same.

    If you were writing in C instead of Python and used something like llama.cpp's optimization tricks, you could probably get somewhat acceptable training performance in 2 or 3 KLOC. You'd still be off by one or two orders of magnitude compared to a GPU cluster, but a lot better than naive, loopy Python.

    [1] https://github.com/karpathy/llama2.c

    [2] https://github.com/karpathy/micrograd

  • Minimal neural network implementation
    4 projects | /r/C_Programming | 6 Dec 2023
    A bit off topic, but ML guru Karpathy has implemented a state-of-the-art Llama 2 model in plain C with no dependencies on third-party libraries. See the repo.
  • WebLLM: Llama2 in the Browser
    4 projects | news.ycombinator.com | 28 Aug 2023
    Related: I built karpathy's llama2.c (https://github.com/karpathy/llama2.c) to WASM without modifications and ran it in the browser. It was a fun exercise to directly compare native vs. web performance. I'm getting 80% of native performance on my M1 MacBook Air and haven't spent any time optimizing the WASM side.

    Demo: https://diegomarcos.com/llama2.c-web/

    Code:

  • Lfortran: Modern interactive LLVM-based Fortran compiler
    2 projects | news.ycombinator.com | 28 Aug 2023
    Would be cool for there to be a `llama2.f`, similar to https://github.com/karpathy/llama2.c, to demo its capabilities.
  • Llama2.c L2E LLM – Multi OS Binary and Unikernel Release
    4 projects | news.ycombinator.com | 25 Aug 2023
    This is a fork of https://github.com/karpathy/llama2.c

    karpathy's llama2.c is like llama.cpp, but it is written in C and the Python training code is available in the same repo. llama2.c's goal is to be an elegant single-file C implementation of inference and an elegant Python implementation of training.

    His goal is for people to understand how Llama 2 and LLMs work, so he keeps it simple and sweet. As the project progresses, features and performance improvements will be added.

    Currently it can run inference on the small "baby" story models trained by Karpathy at a fast pace. It can also run inference on Meta's Llama 2 7B models, but at a very slow rate, around 1 token per second.

    So currently this can be used for learning or as a tech preview.

    Our friendly fork tries to make it portable, performant, and more usable (bells and whistles) over time. Since we mirror upstream closely, the inference capabilities of our fork are similar, but slightly faster if compiled with acceleration. What we try to do differently is make it bootable (not there yet) and portable. Right now you get binary portability: use the same run.com on any x86_64 machine running any OS and it will work (possible thanks to the Cosmopolitan toolchain). The other part that works is unikernels: boot this as a unikernel in VMs (possible thanks to the Unikraft unikernel and toolchain).

    For now, see our fork as a release-early, release-often toy tech demo. We plan to build it out into a useful product.

  • FLaNK Stack Weekly for 14 Aug 2023
    32 projects | dev.to | 14 Aug 2023
  • Adding LLaMa2.c support for Web with GGML.JS
    2 projects | /r/LocalLLaMA | 14 Aug 2023
    In my latest release of ggml.js, I've added support for Karpathy's llama2.c model.
  • Beginner's Guide to Llama Models
    2 projects | news.ycombinator.com | 12 Aug 2023
    I really enjoyed Andrej Karpathy's llama2.c project (https://github.com/karpathy/llama2.c), which runs through creating and running a miniature Llama 2 architecture model from scratch.
  • How to scale LLMs better with an alternative to transformers
    1 project | news.ycombinator.com | 27 Jul 2023
    - https://github.com/karpathy/llama2.c

    I think there may be some applications in this limited space that are worth looking into. You won't replicate GPT-anything, but it may be possible to solve some nice problems much more efficiently than one would expect at first.

  • A simple guide to fine-tuning Llama 2
    1 project | news.ycombinator.com | 27 Jul 2023
    It does now: https://github.com/karpathy/llama2.c#metas-llama-2-models

What are some alternatives?

When comparing micrograd and llama2.c you can also consider the following projects:

deepnet - Educational deep learning library in plain Numpy.

llama2.c - Llama 2 Everywhere (L2E)

tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]

fastGPT - Fast GPT-2 inference written in Fortran

deeplearning-notes - Notes for Deep Learning Specialization Courses led by Andrew Ng.

CML_AMP_Churn_Prediction_mlflow - Build a scikit-learn model to predict churn using customer telco data.

ML-From-Scratch - Machine Learning From Scratch. Bare bones NumPy implementations of machine learning models and algorithms with a focus on accessibility. Aims to cover everything from linear regression to deep learning.

awesome-data-temporality - A curated list to help you manage temporal data across many modalities 🚀.

NNfSiX - Neural Networks from Scratch in various programming languages

dify - Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.

yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors

feldera - Feldera Continuous Analytics Platform