hyperlearn VS gpt-fast

Compare hyperlearn vs gpt-fast and see what their differences are.

gpt-fast

Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. (by pytorch-labs)
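
gpt-fast's one-line pitch is that plain PyTorch is enough for fast decoding: a pre-allocated ("static") KV cache, a simple per-token step, and torch.compile on top. The sketch below shows the shape of such a loop with a toy one-layer model; it illustrates the pattern only and is not gpt-fast's actual code (the real repo adds torch.compile with static shapes, int8/int4 quantization, and speculative decoding).

```python
# Sketch of the pattern: a tiny one-layer causal model with a pre-allocated
# KV cache, decoded one token at a time. Toy code for illustration only -
# gpt-fast's real model, cache layout, and compile setup differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAttnLM(nn.Module):
    def __init__(self, vocab=256, dim=64, max_seq=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.head = nn.Linear(dim, vocab)
        # static cache buffers, written one position per step
        self.register_buffer("k_cache", torch.zeros(1, max_seq, dim))
        self.register_buffer("v_cache", torch.zeros(1, max_seq, dim))

    def forward(self, tok, pos):
        x = self.emb(tok)                        # [1, dim]
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        self.k_cache[:, pos] = k                 # append to the cache
        self.v_cache[:, pos] = v
        keys = self.k_cache[:, : pos + 1]        # attend over all cached keys
        vals = self.v_cache[:, : pos + 1]
        att = (keys @ q.unsqueeze(-1)).squeeze(-1) / keys.shape[-1] ** 0.5
        h = (F.softmax(att, dim=-1).unsqueeze(1) @ vals).squeeze(1)
        return self.head(h + x)                  # residual, then logits

@torch.no_grad()
def generate(model, prompt, max_new_tokens=20):
    # gpt-fast additionally wraps this per-token step in torch.compile with
    # static shapes; omitted here to keep the sketch minimal.
    for pos, t in enumerate(prompt):             # prefill the cache
        logits = model(torch.tensor([t]), pos)
    tok, out = logits.argmax(dim=-1), list(prompt)
    for pos in range(len(prompt), len(prompt) + max_new_tokens):
        out.append(int(tok))                     # greedy decoding
        logits = model(tok, pos)
        tok = logits.argmax(dim=-1)
    return out

model = TinyAttnLM()
print(generate(model, [1, 2, 3]))
```

The speed in the real repo comes from compiling exactly this kind of fixed-shape per-token step, so no Python overhead or tensor re-allocation happens on each generated token.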
              hyperlearn             gpt-fast
Mentions      4                      8
Stars         1,578                  5,076
Growth        4.3%                   5.9%
Activity      0.0                    8.3
Last commit   over 1 year ago        4 days ago
Language      Jupyter Notebook       Python
License       Apache License 2.0     BSD 3-clause "New" or "Revised" License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
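
LibHunt does not publish the formula behind these activity numbers, but the description suggests a recency-weighted commit count percentile-ranked against all tracked projects. The sketch below is a guessed illustration only - the half-life decay and the percentile mapping are both assumptions, not the site's actual method.

```python
# Hypothetical illustration of a relative activity score: recency-weighted
# commits, percentile-ranked against every tracked project, scaled to 0-10.
# The decay and the mapping are assumptions; the real formula is unpublished.

def weighted_commits(commit_ages_days, half_life_days=90.0):
    """Recent commits count more; a commit's weight halves every 90 days."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

def activity_score(project_ages, all_tracked_ages):
    """Percentile rank of this project among all tracked projects, on 0-10."""
    mine = weighted_commits(project_ages)
    scores = [weighted_commits(ages) for ages in all_tracked_ages]
    rank = sum(s <= mine for s in scores)
    return 10.0 * rank / len(scores)

# An activity of 9.0 would mean the project out-scores ~90% of the population.
tracked = [[1, 3, 10], [400, 500], [2, 5, 7, 30], [100, 200], [1, 2]]
print(activity_score([1, 3, 10], tracked))   # -> 8.0 for this toy population
```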

hyperlearn

Posts with mentions or reviews of hyperlearn. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-01.
  • 80% faster, 50% less memory, 0% accuracy loss Llama finetuning
    3 projects | news.ycombinator.com | 1 Dec 2023
    I agree fully - what do you suggest then? OSS the entire code base and use AGPL3? I tried that with https://github.com/danielhanchen/hyperlearn to no avail - we couldn't even monetize it at all, so I just OSSed everything.

    I listed all the research articles and methods in Hyperlearn which in the end were gobbled up by other packages.

    We still have to cover life expenses and stuff sadly as a startup.

    Do you have any suggestions for how we could go about this? We considered building an actual training / inference platform and not OSSing any code at all, but we decided against that, so we OSSed some code.

    Any suggestions are welcome!

  • 80% faster, 50% less memory, 0% loss of accuracy Llama finetuning
    6 projects | news.ycombinator.com | 1 Dec 2023
    Good point - the main problem is that we encountered this exact issue with our old package Hyperlearn (https://github.com/danielhanchen/hyperlearn).

    I OSSed all the code to the community - I'm actually an extremely open person and I love contributing to the OSS community.

    The issue was the package got gobbled up by other startups and big tech companies with no credit - I didn't want any cash from it, but it stung really badly to hear other startups and companies claim that they were the ones who made it faster, when it was actually my work. As an OSS person, I don't want money, just some recognition for the work.

    I also used to accept and help everyone with writing their startup's software, but I never got paid or even thanked - sadly I didn't expect the world to be such a hostile place.

    So after a sad awakening, I decided with my brother that, instead of OSSing everything, we would first OSS something that is still very good - 5x faster training is already very reasonable.

    I'm all open to other suggestions on how we should approach this though! There are no evil intentions - in fact I insisted we OSS EVERYTHING, even the 30x faster algos, but after a level-headed discussion with my brother - we still have to pay life expenses, no?

    If you have other ways we can go about this - I'm all ears!! We're literally making stuff up as we go along!

  • [Project] BFLOAT16 on ALL hardware (>= 2009), up to 2000x faster ML algos, 50% less RAM usage for all old/new hardware - Hyperlearn Reborn.
    2 projects | /r/MachineLearning | 2 Jun 2022
    Hello everyone!! It's been a while!! Years back I released Hyperlearn (https://github.com/danielhanchen/hyperlearn). It has 1.2K GitHub stars, and in it I made tonnes of algos faster:
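
The bfloat16-on-any-hardware claim rests on a well-known property of the format: bfloat16 is simply the top 16 bits of an IEEE float32, so its storage and rounding can be emulated wherever float32 works. Below is a minimal NumPy sketch of that truncation - an illustration of the format only, not Hyperlearn's actual kernels.

```python
# bfloat16 is the top 16 bits of an IEEE float32, so storage and rounding
# can be emulated on any float32-capable hardware. Sketch only; this is
# not Hyperlearn's actual implementation.
import numpy as np

def float32_to_bfloat16_bits(x: np.ndarray) -> np.ndarray:
    """Round float32 down to bfloat16 precision (round-to-nearest-even),
    returned as uint16 bit patterns (the storage format)."""
    bits = x.astype(np.float32).view(np.uint32)
    # round-to-nearest-even on the 16 mantissa bits being dropped
    rounding = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + rounding) >> 16).astype(np.uint16)

def bfloat16_bits_to_float32(b: np.ndarray) -> np.ndarray:
    """Widen stored bfloat16 bit patterns back to float32 (exact)."""
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([1.0, 3.14159265, 1e-8, 65504.0], dtype=np.float32)
bf = float32_to_bfloat16_bits(x)     # half the bytes of float32
print(bfloat16_bits_to_float32(bf))  # values rounded to ~3 significant digits
```

The halved storage is where a "50% less RAM" figure can come from; arithmetic still happens in float32 after widening.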

gpt-fast

Posts with mentions or reviews of gpt-fast. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-06.

What are some alternatives?

When comparing hyperlearn and gpt-fast, you can also consider the following projects:

data-science-notes - Notes of IBM Data Science Professional Certificate Courses on Coursera

unsloth - Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory

notebooks - Implement, demonstrate, reproduce and extend the results of the Risk articles 'Differential Machine Learning' (2020) and 'PCA with a Difference' (2021) by Huge and Savine, and cover implementation details left out from the papers.

TensorRT-LLM - TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

ocaml-torch - OCaml bindings for PyTorch

stable-fast - Best inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs.

DiffSharp - DiffSharp: Differentiable Functional Programming

optimum-nvidia

MegEngine - MegEngine is a fast, scalable, easy-to-use deep learning framework with support for automatic differentiation

segment-anything-fast - A batched offline inference oriented version of segment-anything

python-machine-learning-book - The "Python Machine Learning (1st edition)" book code repository and info resource

deep-learning-v2-pytorch - Projects and exercises for the latest Deep Learning ND program https://www.udacity.com/course/deep-learning-nanodegree--nd101