GPT-4-LLM: Instruction Tuning with GPT-4 (by Instruction-Tuning-with-GPT-4)
AlpacaDataCleaned: Alpaca dataset from Stanford, cleaned and curated (by gururise)
| | GPT-4-LLM | AlpacaDataCleaned |
|---|---|---|
| Mentions | 5 | 14 |
| Stars | 3,998 | 1,394 |
| Growth | - | - |
| Activity | 5.4 | 7.6 |
| Last Commit | 11 months ago | about 1 year ago |
| Language | HTML | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GPT-4-LLM
Posts with mentions or reviews of GPT-4-LLM. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-22.
- Fine-tuning LLMs with LoRA: A Gentle Introduction
  I'm using the Instruction Tuning with GPT-4 dataset, which is hosted on Hugging Face (see the dataset-loading sketch after this list).
- What’s the current best model that will run well locally on a 3090?
  No, GPT4 x Alpaca, GPT4 Alpaca, and GPT4All use different datasets. GPT4 x Alpaca uses GPTeacher, GPT4 Alpaca uses Microsoft Research's GPT-4-LLM, and GPT4All uses its own. GPT4All is commonly considered the worst of the three in the general community.
- GPT4-X-Alpaca 30B 4-bit, by MetaIX based on LoRA by chansung
  For anyone wondering how this compares with the 13B GPT4 x Alpaca: the dataset used is different. The 13B GPT4 x Alpaca uses the GPTeacher dataset, while this uses the Microsoft Research dataset from Instruction Tuning with GPT-4. It should be a direct upgrade to Stanford's Alpaca, and I'll add it to the wiki as GPT4 Alpaca, without an x, to differentiate it.
- GPT-4 Takes the Lead in Instruction-Tuning of Large Language Models: Advancing Generalization Capabilities for Real-World Tasks
  GitHub: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
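As referenced in the LoRA item above, here is a minimal, hedged sketch of pulling a GPT-4 instruction dataset from Hugging Face and attaching a LoRA adapter with the peft library. The dataset ID vicgalle/alpaca-gpt4 is an assumed community mirror of the GPT-4-LLM JSON, and EleutherAI/pythia-410m is just a small stand-in base model; substitute whatever you actually train.

```python
# Hedged sketch: load a GPT-4 instruction dataset and attach a LoRA adapter.
# "vicgalle/alpaca-gpt4" is an assumed HF mirror of the GPT-4-LLM data;
# "EleutherAI/pythia-410m" is a stand-in base model. Swap in your own.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

data = load_dataset("vicgalle/alpaca-gpt4", split="train")
print(data[0])  # expect instruction / input / output fields

base = "EleutherAI/pythia-410m"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                                 # rank of the low-rank update
    lora_alpha=16,                       # scaling factor
    target_modules=["query_key_value"],  # attention projection in GPT-NeoX models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```

From here the wrapped model trains like any causal LM; only the small adapter matrices receive gradients, which is what makes LoRA cheap on a single consumer GPU.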
AlpacaDataCleaned
Posts with mentions or reviews of AlpacaDataCleaned. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-08.
- While training LoRA I get 'Failed to read file... JSON parse error'
  I tried using the default alpaca_data_cleaned.json training dataset as mentioned here: https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json. Does anyone know why I could be getting this error? The file must be in the correct format, since it is the default file shown in their example. (A quick validation sketch follows this list.)
- Why run LLMs locally?
  This cleaned Alpaca dataset gives a good idea of how data is laid out in the standard Alpaca JSON format (see the format sketch after this list). Personally, I'd build my own datasets by using GPT-4 to format the data. You can do it by hand or use a LLaMA model, but I've found ChatGPT to be the most efficient way to get the highest-quality output; I'm going for quality over quantity.
- New LLaMA LoRA trained on the WizardLM dataset
  I created a dataset merge based on the following very high-quality datasets:
- [P] Finetuning a commercially viable open source LLM (Flan-UL2) using Alpaca, Dolly15K and LoRA
- Stability AI Launches the First of Its StableLM Suite of Language Models
  That dataset is licensed under CC BY-NC 4.0, which is not open. It also has a bunch of garbage in it; see https://github.com/gururise/AlpacaDataCleaned
- Alpacino-13B
- GPT4-X-Alpaca 30B 4-bit, by MetaIX based on LoRA by chansung
  The cleaned Alpaca dataset has integrated the Microsoft GPT-4 dataset and cleaned up many of the issues.
- Alpaca, LLaMa, Vicuna [D]
  13B Alpaca Cleaned (trained on the cleaned dataset) is very impressive and works well as an instruct model without any censorship.
- Is there a good place to post datasets for the community?
  There's already a community-maintained Alpaca with cleaned data: https://github.com/gururise/AlpacaDataCleaned. A huge amount of work has already been done.
- Dirty data sets and LLaMA/ALPACA...
  This might be what you're looking for: https://github.com/gururise/AlpacaDataCleaned
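For the "JSON parse error" question above, a small sanity check usually finds the problem. This is a hedged sketch assuming a local copy named alpaca_data_cleaned.json; it parses the file with the standard json module (a stray byte-order mark or a truncated download are common culprits) and verifies each record has the expected Alpaca keys.

```python
# Minimal sanity check for an Alpaca-format training file.
# Assumes a local copy of alpaca_data_cleaned.json; adjust the path as needed.
import json

path = "alpaca_data_cleaned.json"

# utf-8-sig transparently strips a byte-order mark, a common parse-error cause.
with open(path, encoding="utf-8-sig") as f:
    records = json.load(f)  # raises json.JSONDecodeError if the file is malformed

required = {"instruction", "input", "output"}
for i, rec in enumerate(records):
    missing = required - rec.keys()
    if missing:
        raise ValueError(f"record {i} is missing keys: {missing}")

print(f"{len(records)} records parsed OK")
```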
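And for the "Why run LLMs locally?" item on building your own data: the standard Alpaca format is just a JSON array of objects with instruction, input, and output fields. A minimal sketch with placeholder rows:

```python
# Write records in the standard Alpaca JSON format: a list of objects with
# instruction / input / output fields. The two rows below are placeholders.
import json

records = [
    {
        "instruction": "Summarize the following paragraph.",
        "input": "LoRA adapts a frozen base model by training low-rank update matrices.",
        "output": "LoRA fine-tunes a model by learning small low-rank weight updates.",
    },
    {
        "instruction": "Give three antonyms for the word 'begin'.",
        "input": "",  # instruction-only tasks leave input empty
        "output": "End, finish, conclude.",
    },
]

with open("my_dataset.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```

A file written this way matches the layout of alpaca_data_cleaned.json and should drop into any Alpaca-style training script.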
What are some alternatives?
When comparing GPT-4-LLM and AlpacaDataCleaned you can also consider the following projects:
character-editor - Create, edit and convert AI character files for CharacterAI, Pygmalion, Text Generation, KoboldAI and TavernAI
StableLM - StableLM: Stability AI Language Models