80% faster, 50% less memory, 0% accuracy loss Llama finetuning

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • hyperlearn

    2-2000x faster ML algos, 50% less memory usage, works on all hardware - new and old.

  • Sorry about that - I'm super new to pricing, so it might seem off since I'm literally making the plans up with my bro as we go along.

    If you don't believe the timings: I was the author of Hyperlearn (https://github.com/danielhanchen/hyperlearn), which makes ML faster - I also listed the papers that cite the algorithms.

    I also used to work at NVIDIA, making TSNE 2000x faster on GPUs, along with other algorithms like randomized SVD and sparse matrix multiplies.

    If you have any suggestions on a more appropriate pricing strategy - I'm all ears!!

    I really don't know much about pricing and the open-core model, so I'm literally making it up as I go.
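  • The randomized SVD mentioned above can be sketched in a few lines of NumPy. This is an illustrative version of the standard Halko–Martinsson–Tropp scheme (random sketching, power iterations, then an exact SVD of a small matrix) - it is not the NVIDIA GPU implementation discussed in the comment, and the oversampling/iteration defaults here are assumptions:

    ```python
    import numpy as np

    def randomized_svd(A, k, oversample=10, n_iter=4, seed=0):
        """Approximate rank-k SVD of A via random sketching."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        # Sketch the column space of A with a Gaussian test matrix.
        Omega = rng.standard_normal((n, k + oversample))
        Y = A @ Omega
        # A few power iterations sharpen the subspace estimate
        # (for ill-conditioned A you would re-orthonormalize each pass).
        for _ in range(n_iter):
            Y = A @ (A.T @ Y)
        # Orthonormal basis Q for the sketched range, then project A onto it.
        Q, _ = np.linalg.qr(Y)
        B = Q.T @ A  # small (k+oversample) x n matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        U = Q @ Ub
        return U[:, :k], s[:k], Vt[:k]
    ```

    The expensive full SVD is replaced by a QR and an SVD on a tall-skinny sketch, which is why the approach maps so well to GPUs: the dominant cost is a handful of dense matrix multiplies.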

  • unsloth

    Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory

  • This seems to just be a link to the Unsloth GitHub repo[0], which in turn is the free version of Unsloth Pro/Max[1]. Maybe the link should be changed?

    [0]: https://github.com/unslothai/unsloth

