stanford_alpaca
ChatGLM-6B
| | stanford_alpaca | ChatGLM-6B |
|---|---|---|
| Mentions | 108 | 17 |
| Stars | 28,761 | 39,231 |
| Growth | 1.3% | 3.1% |
| Activity | 2.0 | 8.4 |
| Latest commit | about 2 months ago | 2 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stanford_alpaca
- How Open is Generative AI? Part 2
Alpaca is an instruction-oriented LLM derived from LLaMA, enhanced by Stanford researchers with a dataset of 52,000 examples of following instructions, sourced from OpenAI’s InstructGPT through the self-instruct method. The extensive self-instruct dataset, details of data generation, and the model refinement code were publicly disclosed. This model complies with the licensing requirements of its base model. Due to the utilization of InstructGPT for data generation, it also adheres to OpenAI’s usage terms, which prohibit the creation of models competing with OpenAI. This illustrates how dataset restrictions can indirectly affect the resulting fine-tuned model.
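For reference, each of the 52,000 released examples is a JSON record with `instruction`, `input`, and `output` fields. A minimal Python sketch for loading the data and rendering a training prompt might look like this (the file name follows the repo's conventions, and the prompt template is a simplified stand-in rather than the exact one used for training):

```python
import json

# Load the released instruction-following dataset (assumed filename from the repo).
with open("alpaca_data.json") as f:
    examples = json.load(f)  # list of dicts: {"instruction", "input", "output"}

def format_prompt(example: dict) -> str:
    """Render one record into a prompt (simplified, illustrative template)."""
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        "### Response:\n"
    )

# Inspect one training example: prompt followed by its target response.
print(format_prompt(examples[0]) + examples[0]["output"])
```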
- Ask HN: AI/ML papers to catch up with current state of AI?
- OpenAI board in discussions with Sam Altman to return as CEO
- Are there any AI like ChatGPT without content restrictions?
- Fine-tuning LLMs with LoRA: A Gentle Introduction
In this article, we're going to experiment with LoRA and fine-tune LLaMA into an Alpaca-style model on commodity hardware.
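As a rough illustration of the approach the article describes, attaching LoRA adapters to a causal LM with Hugging Face's `peft` library looks roughly like this (the checkpoint name and hyperparameters are placeholders, not the article's exact settings):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "huggyllama/llama-7b"  # placeholder checkpoint; substitute your own weights
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Low-rank adapters on the attention projections; only these small matrices are trained.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights
```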
- Creating a new Finetuned model
Most papers I read used at least a thousand examples, and in several cases 10,000, so I assumed that to be the trend for low-rank adapter (PEFT) training. (Sources: [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs (arxiv.org), Stanford CRFM (Alpaca), and, at the low end, openchat/openchat · Hugging Face; there are many more examples.)
- Shock tick up for wage growth to 7.3% in blow for Bank of England
I'm not talking about OpenAI's ChatGPT; I'm talking about things like Alpaca. And where did they train those models? Off the existing models, for a fraction of a fraction of a fraction of the cost: https://crfm.stanford.edu/2023/03/13/alpaca.html
- Bye bye Bing
- The idea maze for AI startups (2015)
I think there's a new answer to "How do you get the data?" that wasn't available when this article was written in 2015. The new text and image generative models can now be used to synthesize training datasets.
I was working on a typing autocorrect project and needed a corpus of "text messages". Most of the traditional NLP corpora, like those available through NLTK [0], aren't suitable. But it was easy to script ChatGPT to generate thousands of believable text messages by throwing random topics at it.
Similarly, you can synthesize a training dataset by giving GPT the outputs/labels and asking it to generate a variety of inputs. For sentiment analysis: "Give me 1000 negative movie reviews" and "Now give me 1000 positive movie reviews". (A small sketch of this appears after the links below.)
The Alpaca folks used GPT-3 to generate high-quality instruction-following datasets [1] based on a small set of human samples.
Etc.
[0] https://www.nltk.org/nltk_data/
[1] https://crfm.stanford.edu/2023/03/13/alpaca.html
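To make the data-synthesis idea concrete, a minimal sketch with the OpenAI Python client might look like the following (the model name, batch size, and prompts are illustrative assumptions, not what the commenter actually ran):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def synthesize_reviews(label: str, n: int = 20) -> list[str]:
    """Ask the model for n short movie reviews with the given sentiment label."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You generate realistic movie reviews."},
            {"role": "user", "content": f"Write {n} short {label} movie reviews, one per line."},
        ],
    )
    text = resp.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]

# Build a tiny labeled dataset for sentiment analysis.
dataset = [(r, "neg") for r in synthesize_reviews("negative")]
dataset += [(r, "pos") for r in synthesize_reviews("positive")]
print(len(dataset), dataset[0])
```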
- Repos and tutorials for a full finetune (not LoRA)
AFAIK, the original alpaca repo was a full finetune. https://github.com/tatsu-lab/stanford_alpaca
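For contrast with the LoRA sketch above, a full finetune updates every parameter of the base model. A generic Hugging Face `Trainer` outline looks roughly like this (this is not the stanford_alpaca training script; the checkpoint name, hyperparameters, and toy dataset are placeholders):

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "huggyllama/llama-7b"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)  # every weight is trainable

# Toy stand-in for the instruction data; in practice this would be the 52K Alpaca examples.
texts = ["### Instruction:\nSay hello.\n\n### Response:\nHello!"]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="full-finetune-demo",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal-LM labels
)
trainer.train()
```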
ChatGLM-6B
- What are the current fastest multi-gpu inference frameworks?
ChatGLM seems to be pretty popular but I've never used this before.
- A CEO is spending more than $2,000 a month on ChatGPT Plus accounts for all of his employees, and he says it's saving 'hours' of time
There are also locally hosted options that approach the effectiveness of ChatGPT. This GLM, for example, was specifically trained so it can run on a single consumer-grade GPU.
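The repo's documented usage is along these lines; the 4-bit quantization step is what brings the memory requirement down to roughly the 6 GB mentioned below (treat the exact API and memory figures as repo-specific details that may have changed):

```python
from transformers import AutoModel, AutoTokenizer

# Load the bilingual ChatGLM-6B checkpoint from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

# .quantize(4) applies the repo's INT4 quantization so the model fits in a ~6 GB GPU;
# drop it if you have enough VRAM to run in fp16.
model = (
    AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    .quantize(4)
    .half()
    .cuda()
    .eval()
)

# The repo exposes a chat() helper that carries the conversation history for you.
response, history = model.chat(tokenizer, "Hello, what can you do?", history=[])
print(response)
```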
- Open Source Chinese LLMs
- ChatGLM-6B: run locally on consumer graphics card (6GB of GPU memory required)
- Ask HN: Open source LLM for commercial use?
- Coding LLaMa Modell?
A link for y'all. Definitely gonna try to mess around with this!
- Some socioeconomic questions about GPT, AI, and the future; asking for everyone's advice
- FLiPN-FLaNK Stack Weekly for 20 March 2023
- ChatGLM-6B - an open source 6.2 billion parameter English/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and Reinforcement Learning from Human Feedback. Runs on consumer grade GPUs
- ChatGLM: Open bilingual language model based on General Language Model framework
What are some alternatives?
alpaca-lora - Instruct-tune LLaMA on consumer hardware
llama.cpp - LLM inference in C/C++
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
datagen - Generate authentic looking mock data based on a SQL, JSON or Avro schema and produce to Kafka in JSON or Avro format.
Alpaca-Turbo - Web UI to run alpaca model locally
basaran - Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
Auto-GPT - An experimental open-source attempt to make GPT-4 fully autonomous. [Moved to: https://github.com/Significant-Gravitas/Auto-GPT]
accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support