| | sparsegpt | AlpacaDataCleaned |
|---|---|---|
| Mentions | 16 | 14 |
| Stars | 634 | 1,394 |
| Growth | 5.0% | - |
| Activity | 2.4 | 7.6 |
| Last commit | about 1 month ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
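As a rough illustration only (the site does not publish its formula), a recency-weighted activity score can be sketched as an exponentially decaying sum over commit ages. The function name and the half-life parameter below are invented for the example:

```python
# Hypothetical sketch: recent commits contribute more weight than older ones.
import math

def activity_score(commit_ages_days, half_life_days=90.0):
    """Sum of per-commit weights that halve every `half_life_days`."""
    return sum(math.exp(-math.log(2) * age / half_life_days)
               for age in commit_ages_days)

# A project with recent commits scores higher than one with the same
# number of old commits.
recent = activity_score([1, 5, 10])
old = activity_score([300, 350, 400])
assert recent > old
```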
## sparsegpt
- SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot (https://arxiv.org/abs/2301.00774) (May 2023, 1/2)
- Why Falcon going Apache 2.0 is a BIG deal for all of us.
- New Open-source LLMs! 🤯 The Falcon has landed! 7B and 40B
There is this: https://github.com/IST-DASLab/sparsegpt
- Webinar: Running LLMs performantly on CPUs Utilizing Pruning and Quantization
Check the paper here, it's interesting: https://arxiv.org/abs/2301.00774
- OpenAI chief goes before US Congress to propose licenses for building AI
There's no chance that we've peaked in a bang-for-buck sense - we still haven't adequately investigated sparse networks.
Relevantish: https://arxiv.org/abs/2301.00774
The fact that we can reach those levels of sparseness with pruning also indicates that we're not doing a very good job of generating the initial network conditions.
Being able to come up with trainable initial settings for sparse networks across different topologies is hard, but given that we've had a degree of success with pre-trained networks, pre-training and pre-pruning might also allow for sparse networks with minimally compromised learning capabilities.
If it's possible to pre-train composable network modules, it might also be feasible to define trainable sparse networks with significantly relaxed topological constraints.
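For contrast with the discussion above, the naive baseline that SparseGPT improves on (simple one-shot magnitude pruning, not the paper's second-order method) can be sketched as follows. The function name and array shapes are illustrative, not from the paper:

```python
# Zero out the smallest-magnitude weights until a target sparsity is reached.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest |w| set to zero."""
    k = int(weights.size * sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.random.randn(1024, 1024).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.5)
print((w_sparse == 0).mean())  # ~0.5
```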
- How to run Llama 13B with a 6GB graphics card
Training uses gradient descent, so you want to have good precision during that process. But once you have the overall structure of the network, https://arxiv.org/abs/2210.17323 (GPTQ) showed that you can cut down the precision quite a bit without losing a lot of accuracy. It seems you can cut down further for larger models. For the 13B Llama-based ones, going below 5 bit per parameter is noticeably worse, but for 30B models you can do 4 bits.
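To make the precision trade-off concrete, here is a naive round-to-nearest quantizer (not the error-compensating procedure GPTQ actually uses); the function name and shapes are illustrative:

```python
# Round weights onto a per-row symmetric grid and watch the error grow
# as the bit width shrinks.
import numpy as np

def quantize_rtn(w: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric per-row round-to-nearest quantization."""
    levels = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit
    scale = np.abs(w).max(axis=1, keepdims=True) / levels
    return np.round(w / scale) * scale

w = np.random.randn(64, 256).astype(np.float32)
for bits in (8, 5, 4, 3):
    err = np.abs(quantize_rtn(w, bits) - w).mean()
    print(bits, err)  # mean error grows as bits shrink
```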
The same group did another paper https://arxiv.org/abs/2301.00774 which shows that in addition to reducing the precision of each parameter, you can also prune out a bunch of parameters entirely. It's harder to apply this optimization because models are usually loaded into RAM densely, but I hope someone figures out how to do it for popular models.
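The point about dense loading can be illustrated with a minimal CSR-style layout: store only the nonzero weights plus their positions, then compute with just those. This is a sketch, not how any particular runtime actually stores pruned models:

```python
import numpy as np

def to_csr(dense: np.ndarray):
    """Return (values, col_indices, row_ptr) for a 2-D array."""
    values, cols, row_ptr = [], [], [0]
    for row in dense:
        nz = np.nonzero(row)[0]
        values.extend(row[nz])
        cols.extend(nz)
        row_ptr.append(len(values))
    return (np.array(values, dtype=dense.dtype),
            np.array(cols, dtype=np.int32),
            np.array(row_ptr, dtype=np.int32))

def csr_matvec(values, cols, row_ptr, x):
    """y = A @ x using only the stored nonzeros."""
    y = np.zeros(len(row_ptr) - 1, dtype=values.dtype)
    for i in range(len(y)):
        s, e = row_ptr[i], row_ptr[i + 1]
        y[i] = values[s:e] @ x[cols[s:e]]
    return y

a = np.random.randn(8, 8).astype(np.float32)
a[np.abs(a) < 1.0] = 0.0                 # prune roughly two thirds of weights
vals, cols, ptr = to_csr(a)
x = np.random.randn(8).astype(np.float32)
assert np.allclose(csr_matvec(vals, cols, ptr, x), a @ x, atol=1e-4)
```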
- SparseGPT: Language Models Can Be Accurately Pruned in One-Shot
## AlpacaDataCleaned
- While training LoRA I get 'Failed to read file... JSON parse error'
I tried using the default alpaca_data_cleaned.json training dataset as mentioned here: https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json. Does anyone know why I could be getting this error? The file must be in correct format since it is the default file they have shown in their example.
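One way to narrow down such an error is to load the file directly with Python's `json` module, which reports the line and column of the first syntax problem. The helper name and the key check below are illustrative, assuming the standard alpaca record layout:

```python
import json

REQUIRED_KEYS = {"instruction", "input", "output"}

def check_alpaca_json(path: str):
    """Return the parsed records, or None (with a message) if the file is broken."""
    try:
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
    except json.JSONDecodeError as e:
        print(f"Broken JSON at line {e.lineno}, column {e.colno}: {e.msg}")
        return None
    if not isinstance(data, list):
        print("Expected a top-level JSON list of records")
        return None
    bad = [i for i, rec in enumerate(data) if not REQUIRED_KEYS <= rec.keys()]
    print(f"{len(data)} records, {len(bad)} missing required keys")
    return data

# check_alpaca_json("alpaca_data_cleaned.json")
```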
- Why run LLMs locally?
This cleaned alpaca dataset gives a good idea of how data is formatted for the standard alpaca json format. Personally, I'd make your own datasets by using GPT-4 to format the data. You can do it by hand or use a llama model, but I've found ChatGPT to be the most efficient way to get the highest-quality output. I'm going for quality over quantity.
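For reference, the standard alpaca json format discussed above is a list of instruction/input/output records. The example records and file name below are made up for illustration:

```python
import json

records = [
    {
        "instruction": "Summarize the following paragraph in one sentence.",
        "input": "SparseGPT shows that large language models can be pruned "
                 "to high sparsity in one shot with little accuracy loss.",
        "output": "Large models can be heavily pruned in one shot without "
                  "losing much accuracy.",
    },
    {
        "instruction": "Name the capital of France.",
        "input": "",  # instructions that need no context leave "input" empty
        "output": "Paris.",
    },
]

with open("my_dataset.json", "w", encoding="utf-8") as f:
    json.dump(records, f, indent=2, ensure_ascii=False)
```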
- New llama LoRA trained on WizardLM dataset
I created a dataset merge based on the following very high quality datasets:
- [P] Finetuning a commercially viable open source LLM (Flan-UL2) using Alpaca, Dolly15K and LoRA
- Stability AI Launches the First of Its StableLM Suite of Language Models
That dataset is licensed under CC BY-NC 4.0, which is not an open license. It also has a bunch of garbage in it; see https://github.com/gururise/AlpacaDataCleaned
- Alpacino-13B
- GPT4-X-Alpaca 30B 4-bit, by MetaIX based on LoRA by chansung
The alpaca cleaned dataset has integrated the Microsoft GPT-4 dataset and fixed many of its issues.
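A merge like that typically involves deduplication. As an illustrative sketch (not the actual cleaning pipeline of either project), combining two record lists while dropping repeated instructions might look like:

```python
def merge_and_dedupe(*datasets):
    """Concatenate record lists, keeping the first copy of each instruction."""
    seen, merged = set(), []
    for ds in datasets:
        for rec in ds:
            key = rec["instruction"].strip().lower()
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    return merged

# Toy records standing in for real alpaca / GPT-4 instruction data.
alpaca = [{"instruction": "Name a color.", "input": "", "output": "Blue."}]
gpt4 = [{"instruction": "Name a color.", "input": "", "output": "Red."},
        {"instruction": "Name a fruit.", "input": "", "output": "Apple."}]
merged = merge_and_dedupe(alpaca, gpt4)
print(len(merged))  # 2: the duplicate "Name a color." is dropped
```

Keeping the first occurrence means the order in which datasets are passed decides which copy of a duplicate survives.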
- Alpaca, LLaMa, Vicuna [D]
13b Alpaca Cleaned (trained on the cleaned dataset) is very impressive and works well as an instruct model w/o any censorship.
- Is there a good place to post datasets for the community?
There's already a community maintained Alpaca with cleaned data. https://github.com/gururise/AlpacaDataCleaned And a huge amount of work has already been done.
- Dirty data sets and LLaMA/ALPACA...
This might be what you're looking for: https://github.com/gururise/AlpacaDataCleaned
## What are some alternatives?
- StableLM - StableLM: Stability AI Language Models
- github-copilot-product-specific-terms
- safetensors - Simple, safe way to store and distribute tensors
- promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
- koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
- chat-ui - Open source codebase powering the HuggingChat app
- simpleAI - An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients.
- intel-extension-for-pytorch - A Python package for extending the official PyTorch that can easily obtain performance on Intel platforms
- GPT-4-LLM - Instruction Tuning with GPT-4
- geov - The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pre-trained 9B parameter model.
- txtinstruct - 📚 Datasets and models for instruction-tuning