micrograd VS hlb-gpt

Compare micrograd vs hlb-gpt and see what their differences are.

micrograd

A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API (by karpathy)
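The PyTorch-like API is small enough to show inline; a minimal usage sketch, assuming micrograd is installed from the repo:

```python
from micrograd.engine import Value

# Build a tiny expression graph out of scalar Values.
a = Value(2.0)
b = Value(-3.0)
c = a + b
d = a * b + b**3
e = (c - d).relu()

# Reverse-mode autodiff populates .grad on every node in the graph.
e.backward()
print(a.grad, b.grad)  # de/da = 4.0, de/db = -28.0
```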

hlb-gpt

Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on WikiText-103 on a single A100 in <100 seconds. Scales to larger models with one parameter change (feature currently in alpha). (by tysam-code)
                 micrograd           hlb-gpt
Mentions         22                  5
Stars            8,447               251
Growth           -                   -
Activity         0.0                 3.7
Latest commit    8 days ago          about 2 months ago
Language         Jupyter Notebook    Python
License          MIT License         Apache License 2.0
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

micrograd

Posts with mentions or reviews of micrograd. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-20.
  • Micrograd-CUDA: adapting Karpathy's tiny autodiff engine for GPU acceleration
    3 projects | news.ycombinator.com | 20 Mar 2024
    I recently decided to turbo-teach myself basic CUDA with a proper project. I really enjoyed Karpathy's micrograd (https://github.com/karpathy/micrograd), so I extended it with CUDA kernels and 2D tensor logic. It's a bit longer than the original project, but it's still very readable for anyone wanting to quickly learn about GPU acceleration in practice.
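    The post doesn't include its tensor code, but the shape of the idea is easy to sketch: extend micrograd's scalar `Value` to a 2D tensor whose ops record their own backward rules. A hypothetical illustration, with NumPy standing in for the CUDA kernels:

    ```python
    import numpy as np

    class Tensor2D:
        """Toy 2D autograd node; a CUDA port would dispatch the
        NumPy ops below to GPU kernels instead."""
        def __init__(self, data, _children=()):
            self.data = np.asarray(data, dtype=np.float32)
            self.grad = np.zeros_like(self.data)
            self._backward = lambda: None
            self._prev = set(_children)

        def matmul(self, other):
            out = Tensor2D(self.data @ other.data, (self, other))
            def _backward():
                self.grad += out.grad @ other.data.T
                other.grad += self.data.T @ out.grad
            out._backward = _backward
            return out

        def sum(self):
            out = Tensor2D(self.data.sum(keepdims=True), (self,))
            def _backward():
                self.grad += np.ones_like(self.data) * out.grad
            out._backward = _backward
            return out

        def backward(self):
            # Topological sort, then a reverse sweep: same scheme as micrograd.
            topo, visited = [], set()
            def build(v):
                if v not in visited:
                    visited.add(v)
                    for child in v._prev:
                        build(child)
                    topo.append(v)
            build(self)
            self.grad = np.ones_like(self.data)
            for v in reversed(topo):
                v._backward()

    x = Tensor2D(np.random.randn(2, 3))
    w = Tensor2D(np.random.randn(3, 4))
    x.matmul(w).sum().backward()
    print(w.grad.shape)  # (3, 4)
    ```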
  • Stuff we figured out about AI in 2023
    5 projects | news.ycombinator.com | 1 Jan 2024
    For inference, less than 1 KLOC of pure, dependency-free C is enough, even including the tokenizer and command-line parsing [1]. This was a non-obvious fact for me: in principle, you could have run a modern LLM 20 years ago with just 1,000 lines of code, assuming you're fine with things potentially taking days to run, of course.

    Training wouldn't be that much harder: Micrograd [2] is ~200 LOC of pure Python, so 1,000 lines would probably be enough to train an (extremely slow) LLM. By "extremely slow", I mean that a training run that normally takes hours could take dozens of years, but the results would, in principle, be the same.

    If you were writing in C instead of Python and used something like llama.cpp's optimization tricks, you could probably get somewhat acceptable training performance in 2-3 KLOC. You'd still be one or two orders of magnitude off a GPU cluster, but a lot better than naive, loopy Python.

    [1] https://github.com/karpathy/llama2.c

    [2] https://github.com/karpathy/micrograd
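    To make that point concrete, a full training step on top of micrograd fits in about a dozen lines. A sketch in the spirit of the repo's demo notebook; the toy data here is made up:

    ```python
    from micrograd.nn import MLP

    # Toy dataset: 3 inputs -> 1 target per example (made-up values).
    xs = [[2.0, 3.0, -1.0], [3.0, -1.0, 0.5], [0.5, 1.0, 1.0]]
    ys = [1.0, -1.0, -1.0]

    model = MLP(3, [4, 4, 1])  # 3 inputs, two hidden layers, 1 output

    for step in range(100):
        # Forward pass and sum-of-squared-errors loss.
        preds = [model(x) for x in xs]
        loss = sum((p - y) ** 2 for p, y in zip(preds, ys))

        # Backward pass, then plain SGD on every scalar parameter.
        model.zero_grad()
        loss.backward()
        for p in model.parameters():
            p.data -= 0.05 * p.grad

    print(loss.data)  # should shrink toward 0 over the 100 steps
    ```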

  • Writing a C compiler in 500 lines of Python
    4 projects | news.ycombinator.com | 4 Sep 2023
    Perhaps they were thinking of https://github.com/karpathy/micrograd
  • Linear Algebra for Programmers
    4 projects | news.ycombinator.com | 1 Sep 2023
  • Understanding Automatic Differentiation in 30 lines of Python
    9 projects | news.ycombinator.com | 24 Aug 2023
  • Newbie question: Is there overloading of Haskell function signature?
    1 project | /r/haskell | 26 May 2023
    I was (for fun) trying to recreate micrograd in Haskell. The idea is simple:
  • [D] Backpropagation is not just the chain-rule, then what is it?
    2 projects | /r/MachineLearning | 18 May 2023
    Check out this repo I found a few years back when I was looking into understanding PyTorch better. It's basically a super tiny autodiff library that only works on scalars. The whole repo is under 200 lines of code, so you can pull it up in PyCharm or whatever and step through the code to see how it all comes together. Or... you know. Just read it; it's not super complicated.
  • Neural Networks: Zero to Hero
    5 projects | news.ycombinator.com | 5 Apr 2023
    I'm doing an ML apprenticeship [1] these weeks and Karpathy's videos are part of it. We've gone deep into them, and I found them excellent. All the concepts he illustrates are crystal clear in his mind (even though the concepts themselves are complicated), and that shows in his explanations.

    Also, the way he builds everything up is magnificent: starting from basic Python classes, to derivatives and gradient descent, to micrograd [2], and then from a bigram counting model [3] to makemore [4] and nanoGPT [5].

    [1]: https://www.foundersandcoders.com/ml

    [2]: https://github.com/karpathy/micrograd

    [3]: https://github.com/karpathy/randomfun/blob/master/lectures/m...

    [4]: https://github.com/karpathy/makemore

    [5]: https://github.com/karpathy/nanoGPT
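    The bigram counting model mentioned above is small enough to sketch inline (a rough illustration of the idea, not the lecture's exact code):

    ```python
    import random

    words = ["emma", "olivia", "ava", "isabella", "sophia"]  # toy name list

    # Count character bigrams, using '.' as a start/end token.
    counts = {}
    for w in words:
        chs = ["."] + list(w) + ["."]
        for a, b in zip(chs, chs[1:]):
            counts.setdefault(a, {}).setdefault(b, 0)
            counts[a][b] += 1

    def sample_name():
        out, ch = [], "."
        while True:
            nxt = counts[ch]
            # Sample the next character proportionally to its bigram count.
            ch = random.choices(list(nxt), weights=nxt.values())[0]
            if ch == ".":
                return "".join(out)
            out.append(ch)

    print(sample_name())
    ```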

  • Rustygrad - A tiny Autograd engine inspired by micrograd
    2 projects | /r/rust | 7 Mar 2023
    Just published my first crate, rustygrad, a Rust implementation of Andrej Karpathy's micrograd!
  • Hey Rustaceans! Got a question? Ask here (10/2023)!
    6 projects | /r/rust | 6 Mar 2023
    I've been trying to reimplement Karpathy's micrograd library in rust as a fun side project.

hlb-gpt

Posts with mentions or reviews of hlb-gpt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-23.
  • In Defense of Pure 16-Bit Floating-Point Neural Networks
    2 projects | news.ycombinator.com | 23 May 2023
    As a practitioner specializing in extremely fast-training neural networks, seeing a paper in 2023 treat fp32 as a gold standard over pure non-mixed fp16/bf16 is a bit shocking to me and feels dated/distracting from the discussion. They make good points, but unless I am hopelessly misinformed, it's been pretty well established at this point in a number of circles that fp32 is overkill for the majority of uses of many modern-day practitioners. Loads of networks train directly in bfloat16 as the standard, a lot of the modern LLMs among them. Mixed precision is very much no longer needed, not even with fp16 if you're willing to tolerate some range hacks; if you don't want the range hacks, just use bfloat16 directly. The complexity is not worth it, it adds not much at all, and the dynamic loss scaler a lot of people use is just begging for more issues.

    Both of the main repos that I've published in terms of speed benchmarks train directly in pure fp16 and bf16 respectively without any fp32 frippery, if you want to see an example of both paradigms successfully feel free to take a look (I'll note that bf16 is simpler on the whole for a few reasons, generally seamless): https://github.com/tysam-code/hlb-CIFAR10 [for fp16] and https://github.com/tysam-code/hlb-gpt [for bf16]

    Personally, from my experience, I think fp16/bf16 is honestly a bit too expressive for what we need; fp8 seems to do just fine and I think will be quite alright with some accommodations, just as with pure fp16. The what and the how of that is a story for a different day (at this point, the max pooling operation is basically one of the slowest ops anyway).

    You'll have to excuse my frustration a bit; it is just a bit jarring to see a street sign from way in the past fly forward in the wind to hit you in the face before tumbling on its merry way. Additionally, the general discussion in the comment section doesn't seem to touch on what seems to be a pretty clearly established consensus in certain research circles. It's not really much of a debate anymore: it works, and we're off to bigger and better problems that I think we should be talking about. I guess in one sense that justifies the paper's utility, but it is also a bit frustrating because it re-anchors the conversation a few notches back from where I personally feel it actually is at the moment.

    We've got to move out of the past; this fp32 business to me is like writing a ReLU-activated VGG network in Keras on TensorFlow. Phew.

    And while we're at it, if I may throw my frumpy-grumpy hat right back into the ring: this is an information-theoretic problem! There's not enough discussion of Shannon and co. Let's please fix that too. See my other rants for cross-references, should you be so inclined to punish yourself in that manner.
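    For reference, "training directly in bfloat16" in PyTorch amounts to casting the model and batch up front and skipping the mixed-precision machinery entirely. A minimal sketch, not code from either repo:

    ```python
    import torch
    import torch.nn as nn

    # Tiny stand-in model; hlb-gpt's actual network is a transformer.
    model = nn.Sequential(nn.Linear(64, 128), nn.GELU(), nn.Linear(128, 10))
    model = model.to(torch.bfloat16)  # pure bf16: no fp32 master weights

    opt = torch.optim.SGD(model.parameters(), lr=1e-2)

    x = torch.randn(32, 64, dtype=torch.bfloat16)
    y = torch.randint(0, 10, (32,))

    logits = model(x).float()  # upcast logits for a numerically safer loss
    loss = nn.functional.cross_entropy(logits, y)
    loss.backward()            # gradients land in bf16; no loss scaler needed
    opt.step()
    opt.zero_grad()
    ```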

  • Neural Networks: Zero to Hero
    5 projects | news.ycombinator.com | 5 Apr 2023
    I made a smaller GPT model that starts from Andrej's code and converges to a decent loss in a short amount of time on an A100, just under 2.5 minutes or so: https://github.com/tysam-code/hlb-gpt

    With the original hyperparameters it took 30-60 minutes; with a pruned-down network and adjusted hyperparameters, about 6 minutes; and a variety of optimizations beyond that brought it down further.

    If you want the version that is basically feature-identical to nanoGPT (but pruned down), release 0.0.0 at ~6 minutes or so is your best bet.

    You can get A100s cheaply and securely through Colab or LambdaLabs.

  • [P] 10x faster reinforcement learning HPO - now with CNNs!
    3 projects | /r/MachineLearning | 5 Apr 2023
    Check it out! If LLMs are your thing, I did basically the same thing, but for 3.8 val loss on WikiText-103 in maybe 2.3ish minutes or so on an A100: https://github.com/tysam-code/hlb-gpt.
  • MIT 6.S191: Recurrent Neural Networks, Transformers, and Attention
    2 projects | news.ycombinator.com | 2 Apr 2023
    https://github.com/tysam-code/hlb-gpt

    Both of these implementations are pretty straightforward for what they do, but the CIFAR-10 one has less dynamic scheduling and such, so it might be easier to fit in your head. However, both are meant to be simple and extremely hackable, if you want to poke around, take some pieces apart, or add watchpoints to see how different pieces evolve. I was partially inspired by, among many things, one of those see-through engine kits I saw in a magazine growing up as a child; it struck me as a very cool, dynamic, hands-on way to watch how the pieces move in a difficult topic. Sometimes that is the best way our brains can learn, though we are all different and, in my experience, learn best through different mediums.

    Feel free to let me know if you have any specific questions and I'll endeavor to do my best to help you here. Welcome to an interest in the field!

    I guess to briefly touch on one topic: some people focus only on the technical details first, like backprop, and though heavy math is required for more advanced research, I don't learn concepts well through details alone. Knowing that backprop means "calculate the slope of the error in this high-dimensional space where the neural network was wrong at a certain point, then take a tiny step towards minimizing the error; after N steps, we converge to a representation that is like a zip file of our input data inside a mathematical function" is probably enough for 90-95% of the use cases you'll encounter as an ML practitioner. The math is cool, but there are more important things to sweat over, IMO, and I think messaging to the contrary raises the barrier to entry to the field and distracts from the important things. It's good to learn the math once you have space in your brain for it, after you understand how the whole thing works together, though that is just my personal opinion, after all.

    Much love and care and all that and again feel free to let me know if you have any questions please. :) <3
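    That "tiny step against the slope" description is literally all a gradient-descent step is; a toy numeric example:

    ```python
    # Minimize f(w) = (w - 3)^2, whose gradient is f'(w) = 2 * (w - 3).
    w, lr = 0.0, 0.1
    for _ in range(50):
        grad = 2 * (w - 3)
        w -= lr * grad   # step a little downhill
    print(w)  # ~3.0: the error-minimizing value
    ```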

  • [P] Introducing hlb-gpt: A rapid prototyping toolbench in <350 lines of code to speed up your LLM research exploration
    2 projects | /r/MachineLearning | 5 Mar 2023
    You can find the code for hlb-gpt here: https://github.com/tysam-code/hlb-gpt

What are some alternatives?

When comparing micrograd and hlb-gpt you can also consider the following projects:

deepnet - Educational deep learning library in plain Numpy.

hlb-CIFAR10 - Train CIFAR-10 in <7 seconds on an A100, the current world record.

tinygrad - You like pytorch? You like micrograd? You love tinygrad! ❤️ [Moved to: https://github.com/tinygrad/tinygrad]

randomfun - Notebooks and various random fun

deeplearning-notes - Notes for Deep Learning Specialization Courses led by Andrew Ng.

makemore - An autoregressive character-level language model for making more things

ML-From-Scratch - Machine Learning From Scratch. Bare bones NumPy implementations of machine learning models and algorithms with a focus on accessibility. Aims to cover everything from linear regression to deep learning.

nanoGPT - The simplest, fastest repository for training/finetuning medium-sized GPTs.

NNfSiX - Neural Networks from Scratch in various programming languages

AgileRL - Streamlining reinforcement learning with RLOps. State-of-the-art RL algorithms and tools.

yolov7 - Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors

machine.academy - Neural Network training library in C++ and C# with GPU acceleration