alpaca-7b-truss VS stanford_alpaca

Compare alpaca-7b-truss vs stanford_alpaca and see what their differences are.

stanford_alpaca

Code and documentation to train Stanford's Alpaca models, and generate the data. (by tatsu-lab)
                 alpaca-7b-truss    stanford_alpaca
Mentions         2                  108
Stars            317                28,893
Growth           -                  1.0%
Activity         6.0                2.0
Last commit      11 months ago      2 months ago
Language         Python             Python
License          -                  Apache License 2.0
Mentions - the total number of mentions of a project that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed, with recent commits weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

alpaca-7b-truss

Posts with mentions or reviews of alpaca-7b-truss. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-22.
  • [Project] ChatLLaMA - A ChatGPT style chatbot for Facebook's LLaMA
    1 project | /r/MachineLearning | 22 Mar 2023
    If you want to deploy your own instance of the model powering the chatbot and build something similar, we've open-sourced the Truss here: https://github.com/basetenlabs/alpaca-7b-truss
  • Show HN: ChatLLaMA – A ChatGPT style chatbot for Facebook's LLaMA
    10 projects | news.ycombinator.com | 22 Mar 2023
    ChatLLaMA is an experimental chatbot interface for interacting with variants of Facebook's LLaMA. Currently, we support the 7 billion parameter variant that was fine-tuned on the Alpaca dataset. This early version isn't as conversational as we'd like, but over the next week or so, we're planning on adding support for the 30 billion parameter variant, another variant fine-tuned on LAION's OpenAssistant dataset, and more as we explore what this model is capable of.

    If you want to deploy your own instance of the model powering the chatbot and build something similar, we've open-sourced the Truss here: https://github.com/basetenlabs/alpaca-7b-truss

    We'd love to hear any feedback you have. You can reach me on Twitter @aaronrelph or Abu (the engineer behind this) @aqaderb.

    Disclaimer: We both work at Baseten. This was a weekend project. Not trying to shill anything; just want to build and share cool stuff.
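
For anyone who wants to try the Truss these posts mention, here is a minimal client sketch. It assumes the packaged model is already running locally on port 8080 (Truss's default) and exposes the standard /v1/models/model:predict endpoint; the "prompt" input key is also an assumption, so check the repo's README for the exact input schema.

```python
# Minimal client sketch for a locally running Truss server. Assumes the
# server is already up on localhost:8080 (the Truss default) and that the
# model accepts a JSON body with a "prompt" key -- both are assumptions;
# check the repo's README for the exact input schema.
import requests

resp = requests.post(
    "http://localhost:8080/v1/models/model:predict",
    json={"prompt": "Write a haiku about llamas."},
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```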

stanford_alpaca

Posts with mentions or reviews of stanford_alpaca. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-19.
  • How Open is Generative AI? Part 2
    8 projects | dev.to | 19 Dec 2023
    Alpaca is an instruction-oriented LLM derived from LLaMA, enhanced by Stanford researchers with a dataset of 52,000 examples of following instructions, sourced from OpenAI’s InstructGPT through the self-instruct method. The extensive self-instruct dataset, details of data generation, and the model refinement code were publicly disclosed. This model complies with the licensing requirements of its base model. Due to the utilization of InstructGPT for data generation, it also adheres to OpenAI’s usage terms, which prohibit the creation of models competing with OpenAI. This illustrates how dataset restrictions can indirectly affect the resulting fine-tuned model.
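
As a rough illustration of the self-instruct approach described above, the sketch below seeds an API model with a couple of hand-written tasks and asks it to propose new instruction/output pairs. This is not the actual Alpaca pipeline (which used text-davinci-003 plus filtering and deduplication); the model name and prompt wording are illustrative assumptions.

```python
# Hedged sketch of self-instruct-style data generation. This is NOT the
# actual Alpaca pipeline (which used text-davinci-003 plus filtering and
# deduplication); the model name and prompt wording are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

seed_tasks = [
    {"instruction": "Classify the sentiment of this tweet: ...", "output": "..."},
    {"instruction": "Summarize the following paragraph: ...", "output": "..."},
]

prompt = (
    "You are generating training data. Given these example tasks:\n"
    + json.dumps(seed_tasks, indent=2)
    + "\nWrite 5 new, diverse instruction/output pairs as a JSON list."
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
# Optimistic parse; a real pipeline would validate, filter, and dedupe.
new_pairs = json.loads(resp.choices[0].message.content)
print(new_pairs)
```
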
  • Ask HN: AI/ML papers to catch up with current state of AI?
    3 projects | news.ycombinator.com | 15 Dec 2023
  • OpenAI board in discussions with Sam Altman to return as CEO
    1 project | news.ycombinator.com | 19 Nov 2023
  • Are there any AI like ChatGPT without content restrictions?
    1 project | /r/OpenAI | 3 Oct 2023
  • Fine-tuning LLMs with LoRA: A Gentle Introduction
    3 projects | dev.to | 22 Aug 2023
    In this article, we're going to experiment with LoRA and fine-tune a LLaMA-based Alpaca model using consumer hardware.
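
For context, the core of a LoRA fine-tune with Hugging Face's peft library is only a few lines. A hedged sketch follows; the checkpoint name, rank, and target module names are typical choices for LLaMA-family models, not the article's exact recipe.

```python
# Minimal LoRA setup with Hugging Face peft. The checkpoint name, rank,
# and target_modules are common choices for LLaMA-family models, not the
# article's exact recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

name = "huggyllama/llama-7b"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```
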
  • Creating a new Finetuned model
    3 projects | /r/LocalLLaMA | 11 Jul 2023
    Most papers I read showed at least a thousand examples, even 10,000 in several cases, so I assumed that to be the trend for low-rank adapter (PEFT) training. (Sources: [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs (arxiv.org), Stanford CRFM (Alpaca), with the minimum being openchat/openchat · Hugging Face; there are many more examples.)
  • Shock tick up for wage growth to 7.3% in blow for Bank of England
    1 project | /r/unitedkingdom | 11 Jul 2023
    I'm not talking about OpenAI's ChatGPT; I'm talking about things like Alpaca. And where did they train these models? Off the existing models, for a fraction of a fraction of a fraction of the cost: https://crfm.stanford.edu/2023/03/13/alpaca.html
  • Bye bye Bing
    5 projects | /r/ChatGPT | 30 Jun 2023
  • The idea maze for AI startups (2015)
    2 projects | news.ycombinator.com | 28 Jun 2023
    I think there's a new approach for “How do you get the data?” that wasn't available when this article was written in 2015. The new text and image generative models can now be used to synthesize training datasets.

    I was working on a typing autocorrect project and needed a corpus of "text messages". Most of the traditional NLP corpora, like those available through NLTK [0], aren't suitable. But it was easy to script ChatGPT to generate thousands of believable text messages by throwing random topics at it.

    Similarly, you can synthesize a training dataset by giving GPT the outputs/labels and asking it to generate a variety of inputs. For sentiment analysis... "Give me 1000 negative movie reviews" and "Now give me 1000 positive movie reviews".

    The Alpaca folks used GPT-3 to generate high-quality instruction-following datasets [1] based on a small set of human samples.

    Etc.

    [0] https://www.nltk.org/nltk_data/

    [1] https://crfm.stanford.edu/2023/03/13/alpaca.html
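
A quick sketch of the "labels in, inputs out" trick from this comment: ask a chat model for examples of each class to bootstrap a small sentiment dataset. The model name and prompt phrasing are illustrative assumptions.

```python
# Sketch of the "labels in, inputs out" trick described above: ask a chat
# model for examples of each class to bootstrap a sentiment dataset. The
# model name and prompt phrasing are illustrative.
from openai import OpenAI

client = OpenAI()
dataset = []
for label in ("positive", "negative"):
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Write 10 short {label} movie reviews, one per line.",
        }],
    )
    for line in resp.choices[0].message.content.splitlines():
        if line.strip():
            dataset.append({"text": line.strip(), "label": label})
print(len(dataset), "synthetic examples")
```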

  • Repos and tutorials for a full finetune (not LoRA)
    1 project | /r/LocalLLaMA | 2 Jun 2023
    AFAIK, the original alpaca repo was a full finetune. https://github.com/tatsu-lab/stanford_alpaca
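
Since the distinction between a full finetune and LoRA comes up here, below is a hedged sketch of a full (non-adapter) causal-LM fine-tune with the Hugging Face Trainer, in the spirit of stanford_alpaca's train.py. The checkpoint name, toy dataset, and hyperparameters are illustrative; a real full finetune of a 7B model needs multiple GPUs (the original used FSDP).

```python
# Hedged sketch of a full (non-adapter) causal-LM fine-tune with the
# Hugging Face Trainer, in the spirit of stanford_alpaca's train.py.
# Checkpoint, toy dataset, and hyperparameters are illustrative.
import torch
from torch.utils.data import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

name = "huggyllama/llama-7b"  # illustrative; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)


class ToyInstructionSet(Dataset):
    """Tiny stand-in for the 52k-example Alpaca instruction dataset."""

    def __init__(self):
        texts = [
            "### Instruction:\nName three colors.\n\n### Response:\nRed, green, blue.",
        ]
        self.enc = tokenizer(texts, truncation=True, max_length=512)

    def __len__(self):
        return len(self.enc["input_ids"])

    def __getitem__(self, i):
        ids = torch.tensor(self.enc["input_ids"][i])
        # For causal LM training, the labels are the input ids themselves.
        return {"input_ids": ids, "labels": ids.clone()}


trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-full-ft",
        per_device_train_batch_size=1,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=ToyInstructionSet(),
)
trainer.train()
```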

What are some alternatives?

When comparing alpaca-7b-truss and stanford_alpaca, you can also consider the following projects:

hh-rlhf - Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"

alpaca-lora - Instruct-tune LLaMA on consumer hardware

chatllama - ChatLLaMA 📢 Open source implementation for LLaMA-based ChatGPT runnable in a single GPU. 15x faster training process than ChatGPT

ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型

nebuly - The user analytics platform for LLMs

Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

llama.cpp - LLM inference in C/C++

LLM-As-Chatbot - LLM as a Chatbot Service

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

Alpaca-Turbo - Web UI to run alpaca model locally