The LLaMA Effect: Leak Sparked a Series of Open Source Alternatives to ChatGPT

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • llama

    Inference code for Llama models

  • I was playing around w/ a lot of these models as well, and was surprised by how badly LLaMA performed vs its benchmark scores [1][2][3]. However, recently @tyfon mentioned he had great success w/ LLaMA and shared his prompt [4] (based on more recent work by llama.cpp contributors), and it performed much better in my own testing.

    There's basically a new fine-tune a day, and while I don't like some of them (Alpaca, Vicuna, Baize, and Koala are all fine-tuned to be too limiting IMO), I'm interested in what gpt4-x-alpaca and OA (Open Assistant) are doing, as well as the various unfiltered fine-tunes (especially w/ lighter-weight adapter/LoRA training, which would let you personalize/specialize).

    GPTQ-for-LLaMa lets me load the 4-bit quantized 30B model (~17GiB) onto my GPU in about 5 seconds (and I know llama.cpp's mmap improvements have also made loading quite a lot quicker), so I think it's perfectly reasonable to switch between tuned models for tasks like code assistance, correspondence, etc. (a rough sketch of this model-swapping workflow follows the links below).

    I have access to ChatGPT 4, and agree it's significantly better than what's out there atm; it can basically do anything I've thrown at it (here it is helping me with my WM yak shaving: https://sharegpt.com/c/Xv73Vwl, or discussing MAPS/psychedelics for clinical applications: https://sharegpt.com/c/N3VXFxS - it's amazing what it can pull from memory, and it hallucinates much less than 3.5). That being said, I've found the Browsing 3.5 model to be quite useful for things like catching up on the last few years of LLM advancements: https://sharegpt.com/c/JFexqvm

    [1] https://github.com/facebookresearch/llama/blob/main/MODEL_CA...

    [2] https://github.com/ggerganov/llama.cpp/discussions/406

    [3] https://paperswithcode.com/sota/language-modelling-on-wikite...

    [4] https://news.ycombinator.com/item?id=35484341
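
    To make the model-swapping workflow above concrete, here's a minimal sketch in Python using the llama-cpp-python bindings (rather than GPTQ-for-LLaMa itself); the task names and model paths are hypothetical placeholders, not files from the post:

      # Per-task model swapping with llama-cpp-python (pip install llama-cpp-python).
      # The model paths and task names below are hypothetical placeholders.
      import time
      from llama_cpp import Llama

      TASK_MODELS = {
          "code": "models/gpt4-x-alpaca-30b-q4_0.bin",   # hypothetical path
          "chat": "models/open-assistant-30b-q4_0.bin",  # hypothetical path
      }
      _loaded: dict[str, Llama] = {}

      def model_for(task: str) -> Llama:
          """Load the quantized model for a task on first use; mmap keeps reloads fast."""
          if task not in _loaded:
              t0 = time.time()
              _loaded[task] = Llama(model_path=TASK_MODELS[task], n_ctx=2048)
              print(f"loaded {task!r} model in {time.time() - t0:.1f}s")
          return _loaded[task]

      out = model_for("code")("### Instruction:\nWrite hello world in C.\n### Response:\n",
                              max_tokens=128)
      print(out["choices"][0]["text"])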

  • llama.cpp

    LLM inference in C/C++


  • dmca

    Repository with text of DMCA takedown notices as received. GitHub does not endorse or adopt any assertion contained in the following notices. Users identified in the notices are presumed innocent until proven guilty. Additional information about our DMCA policy can be found at

  • Meta is actively trying to take down publicly available copies of LLaMA: https://github.com/github/dmca/blob/master/2023/03/2023-03-2...

  • FastChat

    An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

  • This is incorrect. According to the official instructions at https://github.com/lm-sys/FastChat#vicuna-weights, you need the original LLaMA weights before applying the Vicuna weight delta (a sketch of what applying the delta means follows below).
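
    As a hedged illustration (not FastChat's actual code; the repo ships its own apply_delta script for the real thing), applying the delta just means reconstructing each target tensor as base plus delta:

      # Sketch of the idea behind Vicuna's delta weights: target = base + delta,
      # tensor by tensor. The real FastChat script also handles details like the
      # expanded tokenizer vocabulary, which this sketch ignores.
      def apply_delta(base_state: dict, delta_state: dict) -> dict:
          """Reconstruct target weights from base (LLaMA) weights and a delta."""
          return {name: base_state[name] + delta_state[name] for name in delta_state}

      # usage: target = apply_delta(llama.state_dict(), delta.state_dict())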

  • GPTQ-triton

    GPTQ inference Triton kernel

  • Slightly tangential, but I had intended to start playing around with LLaMA and building some agents. I got the 4-bit versions up and running on my 3090 before I was quickly nerd-sniped by a performance problem...

    The popular repo for quantizing and running LLaMA is GPTQ-for-LLaMa on GitHub, which mostly copies code from the GPTQ authors. The custom CUDA kernels are needed to support the specific kind of quantization that GPTQ does.

    The problem is that while those CUDA kernels are great at short prompt lengths, they fall apart at long ones. You could see people complaining about this, watching their inference speeds slowly tank as their chats/prompts/etc. got longer.

    So off I went, spending the last week or so rewriting the kernels in Triton. I've now got my kernels running faster than the CUDA kernels at all sizes [0], and I'm busily optimizing and fusing other areas. The latest MLP fusion kernels gave another couple of percentage points of performance.

    Yet I still haven't actually played with LLaMA or made those agents I wanted... sigh. And now I'm debating diving into the Triton source code, because they removed integer-unpacking instructions during one of their recent rewrites, so I had to use a shift-and-mask hack in my kernels that makes them use more bandwidth than they otherwise should (a toy version of that unpacking is sketched after the link below). Think of the performance they could have with those! (someone please stop me...)

    [0] https://github.com/fpgaminer/GPTQ-triton/
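
    To make the unpacking hack concrete, here's a toy Triton kernel doing shift-and-mask 4-bit dequantization. It assumes one scale per packed int32 word and a fixed zero point of 8 for simplicity, unlike GPTQ's real group-wise layout, so treat it as a sketch rather than the repo's actual kernels:

      # Toy shift-and-mask 4-bit dequantization in Triton. One scale per packed
      # int32 word and a fixed zero point of 8 are simplifying assumptions.
      import torch
      import triton
      import triton.language as tl

      @triton.jit
      def dequant4_kernel(qweight_ptr, scales_ptr, out_ptr, n_packed, BLOCK: tl.constexpr):
          pid = tl.program_id(0)
          offs = pid * BLOCK + tl.arange(0, BLOCK)
          mask = offs < n_packed
          packed = tl.load(qweight_ptr + offs, mask=mask, other=0)  # 8 nibbles per word
          scale = tl.load(scales_ptr + offs, mask=mask, other=1.0)
          for i in range(8):                       # unrolled at compile time
              nib = (packed >> (4 * i)) & 0xF      # the shift-and-mask unpacking
              val = (nib.to(tl.float32) - 8.0) * scale
              tl.store(out_ptr + offs * 8 + i, val, mask=mask)

      n = 1024
      qweight = torch.randint(0, 2**31 - 1, (n,), dtype=torch.int32, device="cuda")
      scales = torch.rand(n, device="cuda")
      out = torch.empty(n * 8, device="cuda")
      dequant4_kernel[(triton.cdiv(n, 256),)](qweight, scales, out, n, BLOCK=256)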

  • alpaca-lora

    Instruct-tune LLaMA on consumer hardware

  • I'm looking to get my hands on an RTX 4090 to ingest all of a certain company's repair manuals and build a chatbot capable of guiding repairs, or at least to try. So far I'm doing inference only as well [1] (a rough fine-tuning sketch follows the link below).

    [1] https://github.com/tloen/alpaca-lora
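
    A hedged sketch of what that fine-tuning setup might look like with peft-style LoRA adapters; the checkpoint name and hyperparameters here are assumptions for illustration, not alpaca-lora's exact settings:

      # LoRA instruct-tuning setup in the spirit of alpaca-lora. The checkpoint
      # name and hyperparameters are illustrative assumptions.
      from transformers import LlamaForCausalLM
      from peft import LoraConfig, get_peft_model

      base = LlamaForCausalLM.from_pretrained(
          "decapoda-research/llama-7b-hf",  # assumed checkpoint name
          load_in_8bit=True,                # keep the frozen base model on one consumer GPU
          device_map="auto",
      )
      lora = LoraConfig(
          r=8,
          lora_alpha=16,
          target_modules=["q_proj", "v_proj"],  # attention projections, as in alpaca-lora
          lora_dropout=0.05,
          task_type="CAUSAL_LM",
      )
      model = get_peft_model(base, lora)
      model.print_trainable_parameters()  # only the small low-rank adapters train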

