Stanford_alpaca Alternatives
Similar projects and alternatives to stanford_alpaca
-
text-generation-webui
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
-
Open-Assistant
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
-
petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
-
FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
-
character-editor
Create, edit and convert AI character files for CharacterAI, Pygmalion, Text Generation, KoboldAI and TavernAI
-
serge
A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API.
stanford_alpaca reviews and mentions
-
How Open is Generative AI? Part 2
Alpaca is an instruction-following LLM derived from LLaMA, built by Stanford researchers who fine-tuned the base model on 52,000 instruction-following examples generated from OpenAI's InstructGPT (text-davinci-003) via the self-instruct method. The self-instruct dataset, the details of data generation, and the fine-tuning code were all released publicly. The model complies with the licensing requirements of its base model, and because InstructGPT was used for data generation, it is also bound by OpenAI's usage terms, which prohibit creating models that compete with OpenAI. This illustrates how restrictions on a dataset can indirectly constrain the fine-tuned model derived from it.
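For context, the self-instruct recipe boils down to prompting a strong instruction-tuned model with a handful of seed tasks and asking it to produce new instruction/response pairs. A minimal sketch, assuming the modern `openai` Python client and a placeholder model name (both assumptions for illustration; the Alpaca authors called text-davinci-003 through the older completions API):

```python
# Minimal self-instruct-style data generation sketch (illustrative only).
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
import json
from openai import OpenAI

client = OpenAI()

SEED_TASK = {
    "instruction": "Rewrite this sentence in the passive voice.",
    "input": "The cat chased the mouse.",
    "output": "The mouse was chased by the cat.",
}

PROMPT = (
    "You are generating training data for an instruction-following model.\n"
    "Here is an example task:\n{seed}\n\n"
    "Write 5 new, diverse tasks in the same JSON format "
    "(fields: instruction, input, output). Return a JSON array only."
)

def generate_tasks(seed: dict) -> list[dict]:
    """Ask the model to expand one seed task into several new ones."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; use whatever you have access to
        messages=[{"role": "user",
                   "content": PROMPT.format(seed=json.dumps(seed))}],
    )
    # Assumes the model returns bare JSON; real pipelines validate and dedupe.
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    for task in generate_tasks(SEED_TASK):
        print(task["instruction"])
```

The real pipeline iterates this loop over a growing task pool and filters near-duplicates before fine-tuning on the result.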
- Ask HN: AI/ML papers to catch up with current state of AI?
- OpenAI board in discussions with Sam Altman to return as CEO
- Are there any AI like ChatGPT without content restrictions?
-
Fine-tuning LLMs with LoRA: A Gentle Introduction
In this article, we're going to experiment with LoRA and fine-tune an Alpaca-style LLaMA model on consumer-grade hardware.
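The core of that recipe is attaching low-rank adapters to the frozen base model so only a small fraction of parameters are trained. A minimal sketch using Hugging Face `peft` and `transformers` (the checkpoint name and hyperparameters are illustrative assumptions, not the article's exact setup):

```python
# LoRA fine-tuning setup sketch (hyperparameters are assumptions).
# Assumes: pip install transformers peft accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections, as in the LoRA paper
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of total params
```

Because only the adapter weights receive gradients, optimizer state shrinks accordingly, which is what makes single-GPU consumer setups viable.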
-
Creating a new Finetuned model
Most papers I read showed at least a thousand examples, even 10,000 in several cases, so I assumed that to be the trend for low-rank adapter (PEFT) training. (Sources: QLoRA: Efficient Finetuning of Quantized LLMs [arXiv:2305.14314], Stanford CRFM (Alpaca), and, at the low end, openchat/openchat on Hugging Face; there are many more examples.)
-
Shock tick up for wage growth to 7.3% in blow for Bank of England
I'm not talking about OpenAI's ChatGPT; I'm talking about things like Alpaca. And where did they train these models? Off the existing models, for a fraction of a fraction of a fraction of the cost: https://crfm.stanford.edu/2023/03/13/alpaca.html
- Bye bye Bing
-
The idea maze for AI startups (2015)
I think there's a new approach for “How do you get the data?” that wasn't available when this article was written in 2015. The new text and image generative models can now be used to synthesize training datasets.
I was working on a typing-autocorrect project and needed a corpus of "text messages". Most of the traditional NLP corpora, like those available through NLTK [0], aren't suitable. But it was easy to script ChatGPT to generate thousands of believable text messages by throwing random topics at it.
Similarly, you can synthesize a training dataset by giving GPT the outputs/labels and asking it to generate a variety of inputs; see the sketch after this comment. For sentiment analysis: "Give me 1000 negative movie reviews" and "Now give me 1000 positive movie reviews".
The Alpaca folks used GPT-3 to generate high-quality instruction-following datasets [1] based on a small set of human samples.
Etc.
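As a concrete version of the label-conditioned idea above, here is a minimal sketch that asks a chat model for examples of each sentiment class (the `openai` client usage and model name are assumptions for illustration; the commenter scripted ChatGPT, not necessarily this API):

```python
# Sketch: synthesize a sentiment-analysis dataset by conditioning on the label.
# Assumes: pip install openai, OPENAI_API_KEY set; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def synthesize_reviews(label: str, n: int = 10) -> list[str]:
    """Generate n movie reviews with the requested sentiment, one per line."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": f"Write {n} short {label} movie reviews, "
                       f"one per line, no numbering.",
        }],
    )
    return [line for line in resp.choices[0].message.content.splitlines()
            if line.strip()]

# Each synthetic example arrives pre-labeled, so the (text, label) pairs
# drop straight into a classifier's training set.
dataset = [(text, label)
           for label in ("negative", "positive")
           for text in synthesize_reviews(label)]
```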
-
Repos and tutorials for a full finetune (not LoRA)
AFAIK, the original alpaca repo was a full finetune. https://github.com/tatsu-lab/stanford_alpaca
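For orientation, a full finetune updates every weight of the base model rather than a small adapter. A minimal sketch with the Hugging Face `Trainer`, assuming the tatsu-lab/alpaca dataset mirror on the Hub and a placeholder checkpoint (this is a generic illustration, not the repo's actual train.py, which launches via torchrun with FSDP):

```python
# Full-finetune sketch: all parameters trainable (names/settings are assumptions).
# Assumes: pip install transformers datasets accelerate
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)
from datasets import load_dataset

base = "huggyllama/llama-7b"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)  # every weight stays trainable

ds = load_dataset("tatsu-lab/alpaca", split="train")  # the released 52k examples
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
            remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-full-ft",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Unlike the LoRA setup, optimizer state here covers all 7B parameters, which is why the original repo shards training across multiple GPUs.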
-
Stats
tatsu-lab/stanford_alpaca is an open-source project licensed under the Apache License 2.0, which is an OSI-approved license.
The primary programming language of stanford_alpaca is Python.
Popular Comparisons
- stanford_alpaca VS alpaca-lora
- stanford_alpaca VS ChatGLM-6B
- stanford_alpaca VS Open-Assistant
- stanford_alpaca VS llama.cpp
- stanford_alpaca VS GPTQ-for-LLaMa
- stanford_alpaca VS Alpaca-Turbo
- stanford_alpaca VS Auto-GPT
- stanford_alpaca VS text-generation-webui
- stanford_alpaca VS FlexGen
- stanford_alpaca VS StableLM