alpaca.cpp VS stanford_alpaca

Compare alpaca.cpp and stanford_alpaca to see how they differ.

alpaca.cpp

Locally run an Instruction-Tuned Chat-Style LLM (by antimatter15)

stanford_alpaca

Code and documentation to train Stanford's Alpaca models, and generate the data. (by tatsu-lab)
                   alpaca.cpp           stanford_alpaca
Mentions           94                   108
Stars              9,878                28,761
Growth             -                    1.3%
Activity           9.4                  2.0
Last commit        about 1 year ago     about 1 month ago
Language           C                    Python
License            MIT License          Apache License 2.0
Mentions - the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

alpaca.cpp

Posts with mentions or reviews of alpaca.cpp. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-31.

stanford_alpaca

Posts with mentions or reviews of stanford_alpaca. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-19.
  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    Alpaca is an instruction-following LLM derived from LLaMA, fine-tuned by Stanford researchers on a dataset of 52,000 instruction-following examples generated from OpenAI's InstructGPT via the self-instruct method. The full self-instruct dataset, the details of data generation, and the fine-tuning code were publicly released. The model complies with the licensing requirements of its base model, and because InstructGPT was used for data generation it is also bound by OpenAI's usage terms, which prohibit building models that compete with OpenAI. This illustrates how dataset restrictions can indirectly constrain the resulting fine-tuned model. (A minimal sketch of this self-instruct-style data generation appears after this list.)
  • Ask HN: AI/ML papers to catch up with current state of AI?
    3 projects | news.ycombinator.com | 15 Dec 2023
  • OpenAI board in discussions with Sam Altman to return as CEO
    1 project | news.ycombinator.com | 19 Nov 2023
  • Are there any AI like ChatGPT without content restrictions?
    1 project | /r/OpenAI | 3 Oct 2023
  • Fine-tuning LLMs with LoRA: A Gentle Introduction
    3 projects | dev.to | 22 Aug 2023
    In this article, we're going to experiment with LoRA and fine-tune a LLaMA-based Alpaca model on consumer hardware (see the LoRA fine-tuning sketch after this list).
  • Creating a new Finetuned model
    3 projects | /r/LocalLLaMA | 11 Jul 2023
    Most papers I read showed at least a thousand examples, and in several cases 10,000, so I assumed that to be the trend for low-rank adapter (PEFT) training. (Sources: QLoRA: Efficient Finetuning of Quantized LLMs [arXiv:2305.14314], Stanford CRFM (Alpaca), and, at the low end, openchat/openchat on Hugging Face; there are many more examples.)
  • Shock tick up for wage growth to 7.3% in blow for Bank of England
    1 project | /r/unitedkingdom | 11 Jul 2023
    I'm not talking about OpenAI's ChatGPT; I'm talking about things like Alpaca. And where did they train those models? Off the existing models, for a fraction of a fraction of a fraction of the cost: https://crfm.stanford.edu/2023/03/13/alpaca.html
  • Bye bye Bing
    5 projects | /r/ChatGPT | 30 Jun 2023
  • The idea maze for AI startups (2015)
    2 projects | news.ycombinator.com | 28 Jun 2023
    I think there's a new approach for “How do you get the data?” that wasn't available when this article was written in 2015. The new text and image generative models can now be used to synthesize training datasets.

    I was working on a typing autocorrect project and needed a corpus of "text messages". Most traditional NLP corpora, like those available through NLTK [0], aren't suitable. But it was easy to script ChatGPT to generate thousands of believable text messages by throwing random topics at it.

    Similarly, you can synthesize a training dataset by giving GPT the outputs/labels and asking it to generate a variety of inputs. For sentiment analysis... "Give me 1000 negative movie reviews" and "Now give me 1000 positive movie reviews".

    The Alpaca folks used GPT-3 to generate high-quality instruction-following datasets [1] based on a small set of human samples.

    Etc.

    [0] https://www.nltk.org/nltk_data/

    [1] https://crfm.stanford.edu/2023/03/13/alpaca.html

  • Repos and tutorials for a full finetune (not LoRA)
    1 project | /r/LocalLLaMA | 2 Jun 2023
    AFAIK, the original alpaca repo was a full finetune. https://github.com/tatsu-lab/stanford_alpaca
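
The self-instruct-style data generation described in the "How Open is Generative AI?" post above (and in the HN comment about synthesizing training data with GPT) can be sketched in a few lines of Python. This is a rough illustration, not the actual Stanford Alpaca pipeline: the openai client usage is real, but the seed_tasks.json file, the prompt wording, and the model choice are assumptions made for the example.

    # Minimal self-instruct-style data generation sketch -- NOT the Stanford
    # Alpaca pipeline. Assumes the `openai` Python package (v1.x) and a
    # hypothetical seed_tasks.json of human-written instruction examples.
    import json
    import random

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment


    def load_seed_tasks(path="seed_tasks.json"):
        """Load a small set of human-written instruction/response examples."""
        with open(path) as f:
            return json.load(f)


    def build_prompt(seed_examples, n_new=5):
        """Show the model a few seed examples and ask for new, diverse ones."""
        shown = "\n".join(
            f"Instruction: {ex['instruction']}\nResponse: {ex['response']}"
            for ex in seed_examples
        )
        return (
            "Here are some examples of instructions and responses:\n\n"
            f"{shown}\n\n"
            f"Write {n_new} new, diverse instruction/response pairs in the same "
            "format. Keep each response short and self-contained."
        )


    def generate_pairs(seeds, n_new=5, model="gpt-3.5-turbo"):
        prompt = build_prompt(random.sample(seeds, k=min(3, len(seeds))), n_new)
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,
        )
        return resp.choices[0].message.content  # parse into pairs downstream


    if __name__ == "__main__":
        print(generate_pairs(load_seed_tasks()))

The same pattern covers the sentiment-analysis example from the HN comment: put the desired label in the prompt ("Give me 1000 negative movie reviews") and collect the generated text as labeled training inputs.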
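
The LoRA fine-tuning mentioned in the dev.to post above is commonly done with the Hugging Face peft library rather than the full-finetune code in stanford_alpaca. Below is a minimal sketch under assumed defaults: the base checkpoint, the alpaca_subset.json dataset file, and all hyperparameters are placeholders, not the settings from that article or from the Stanford repo.

    # Minimal LoRA fine-tuning sketch using Hugging Face transformers + peft.
    # Base model, dataset file, and hyperparameters are illustrative placeholders.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    base_model = "huggyllama/llama-7b"  # any causal LM checkpoint you can access
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base_model)

    # Wrap the base model with low-rank adapters; only these small matrices train.
    lora_cfg = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # attention projections (LLaMA naming)
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()  # typically well under 1% of all weights

    # A small Alpaca-style dataset with "instruction" and "output" columns.
    data = load_dataset("json", data_files="alpaca_subset.json")["train"]


    def tokenize(example):
        text = (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}"
        )
        return tokenizer(text, truncation=True, max_length=512)


    data = data.map(tokenize, remove_columns=data.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="lora-out",
            per_device_train_batch_size=4,
            num_train_epochs=1,
            learning_rate=2e-4,
            logging_steps=10,
        ),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    model.save_pretrained("lora-out")  # saves only the adapter weights

This is also why the last post above distinguishes a full finetune (what stanford_alpaca does) from LoRA: the adapter approach trains and stores only a small fraction of the parameters, which is what makes consumer hardware viable.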

What are some alternatives?

When comparing alpaca.cpp and stanford_alpaca you can also consider the following projects:

gpt4all - gpt4all: run open-source LLMs anywhere

alpaca-lora - Instruct-tune LLaMA on consumer hardware

llama.cpp - LLM inference in C/C++

ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型

coral-pi-rest-server - Perform inferencing of tensorflow-lite models on an RPi with acceleration from Coral USB stick

Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

ggml - Tensor library for machine learning

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

Alpaca-Turbo - Web UI to run alpaca model locally