qlora
Voyager
| | qlora | Voyager |
|---|---|---|
| Mentions | 80 | 53 |
| Stars | 9,388 | 5,152 |
| Growth | - | 4.4% |
| Activity | 7.4 | 4.7 |
| Last commit | 7 months ago | 26 days ago |
| Language | Jupyter Notebook | JavaScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
qlora
- FLaNK Stack Weekly for 30 Oct 2023
-
I released Marx 3B V3.
Marx 3B V3 is StableLM 3B 4E1T instruction-tuned on EverythingLM Data V3 (ShareGPT format) for 2 epochs using QLoRA.
-
Tuning and Testing Llama 2, Flan-T5, and GPT-J with LoRA, Sematic, and Gradio
https://github.com/artidoro/qlora
The tools and mechanisms for getting a model to do what you want change constantly, and quickly. Build and understand a notebook yourself, and keep dependencies to a minimum; you will need to swap them out.
-
Yet another QLoRA tutorial
My own project is still in raw generated form, and this makes me think about trying qlora's scripts; it gives me some confidence that I can get it to work now that someone else has carved a path and charted the map. I had been planning to target llamatune, which was mentioned here the other day.
-
Creating a new Finetuned model
Most papers I read used at least a thousand examples, and in several cases 10,000, so I assumed that to be the trend for low-rank adapter (PEFT) training. (Sources: [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs (arxiv.org), Stanford CRFM (Alpaca), and, at the low end, openchat/openchat · Hugging Face; there are many more examples.)
-
[R] LaVIN-lite: Training your own Multimodal Large Language Models on one single GPU with competitive performance! (Technical Details)
4-bit quantized training mainly refers to QLoRA. Simply put, QLoRA quantizes the weights of the LLM to 4-bit for storage, while dequantizing them to 16-bit during the training process to preserve training precision. This significantly reduces GPU memory overhead during training (training speed should not vary much), and it combines well with parameter-efficient methods. However, the original paper was designed for single-modal LLMs and the code has already been wrapped into Hugging Face's library, so we extracted the core code from Hugging Face's library and migrated it into LaVIN's code. The main principle is to replace all linear layers in the LLM with 4-bit quantized layers. Those interested can refer to our implementation in quantization.py and mm_adaptation.py, which is roughly a dozen lines of code.
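The idea is easy to sketch. Below is a toy illustration of the store-in-4-bit, dequantize-to-16-bit-for-compute pattern; this is not LaVIN's or bitsandbytes' actual code (it uses simple per-row absmax int4 quantization rather than NF4 kernels), just a minimal sketch of the layer swap described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Int4Linear(nn.Module):
    """Toy 4-bit linear layer: weights stored quantized, dequantized to fp16 for compute."""

    def __init__(self, linear: nn.Linear):
        super().__init__()
        w = linear.weight.data                                    # [out_features, in_features]
        scale = (w.abs().amax(dim=1, keepdim=True) / 7.0).clamp_min(1e-8)
        q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
        self.register_buffer("qweight", q)                        # 4-bit values held in int8
        self.register_buffer("scale", scale.to(torch.float16))
        self.register_buffer(
            "bias", None if linear.bias is None else linear.bias.data.to(torch.float16)
        )

    def forward(self, x):
        # Dequantize on the fly so the compute (and training precision) stays 16-bit.
        w = self.qweight.to(torch.float16) * self.scale
        return F.linear(x.to(torch.float16), w, self.bias)

def quantize_linears(module: nn.Module) -> nn.Module:
    """Recursively replace every nn.Linear, mirroring the swap described above."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, Int4Linear(child))
        else:
            quantize_linears(child)
    return module
```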
-
[D] To all the machine learning engineers: most difficult model task/type you’ve ever had to work with?
There have been some new developments, like QLoRA, that help fine-tune LLMs without updating all the weights.
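For reference, this is roughly what that looks like with Hugging Face transformers and peft (the model id is a placeholder and exact arguments vary by version): the base model is loaded in 4-bit and frozen, and only the small LoRA adapters receive gradients.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantized compute dtype
)
model = AutoModelForCausalLM.from_pretrained(
    "your-base-model",                      # placeholder model id
    quantization_config=bnb,
    device_map="auto",
)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()          # only the adapter weights are trainable
```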
-
Finetune MPT-30B using QLORA
This might be helpful: https://github.com/artidoro/qlora/issues/10
-
is lora fine-tuning on 13B/33B/65B comparable to full fine-tuning?
Curious, since the QLoRA paper only reports the LoRA/QLoRA vs. full fine-tuning comparison for small 7B models; for 13B/33B/65B it does not (Table 4 in the paper). It would be helpful if anyone could provide links where I can read more on the efficacy or disadvantages of LoRA.
-
Need a detailed tutorial on how to create and use a dataset for QLoRA fine-tuning.
This might not be the appropriate answer, but did you take a look at this repository? https://github.com/artidoro/qlora With artidoro's repository it's pretty easy to train QLoRA. You just prepare your own dataset and run the following command:
python qlora.py --model_name_or_path --dataset="path/to/your/dataset" --dataset_format="self-instruct"
This is only available for several dataset formats, but every dataset format has to have input-output pairs, so the dataset JSON has to look like this:
[
  { "input": "something", "output": "something" },
  { "input": "something", "output": "something" }
]
Voyager
-
Google Launches Gemini, Its "Most Powerful" AI Model to Date
Source: conversation with Bing, 12/10/2023.
(1) Wes Roth - YouTube: https://www.youtube.com/@WesRoth
(2) I've set most of my videos to Public again - Community: https://community.openai.com/t/ive-set-most-of-my-videos-to-public-again/24535
(3) AI Updates: Meta Develops Mind-Reading AI System, OpenAI's Q* Is Here ...: https://www.windermeresun.com/2023/11/20/ai-updates-meta-develops-mind-reading-ai-system-openais-q-is-here-how-economy-will-work-after-agi/
(4) David Shapiro: https://www.daveshap.io/
(5) https://natural20.com/
(6) https://arxiv.org/abs/2305.16291
(7) https://twitter.com/DrJimFan/status/1
(8) https://voyager.minedojo.org/
(9) https://minedojo.org/
(10) https://www.youtube.com/@DavidShapiroAutomator/videos
- Is there any game that allows us to interact with it via Python?
-
A Coder Considers the Waning Days of the Craft
> AI cannot sustain itself trained on AI work.
This isn’t true. You can train LLMs entirely on synthetic data and get strong results. [0]
> If new languages, engines etc pop up it cannot synthesize new forms of coding without that code having existed in the first place.
You can describe the semantics to an LLM, have it generate code, tell it what went wrong (e.g., with compiler feedback), and then train on that. For an example of this workflow in a different context, see [1].
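As a toy, self-contained sketch of that loop (the model call and the checker here are stand-ins, not a real training pipeline): generate code, check it, feed the errors back, and keep only verified samples as training data.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return "def add(a, b):\n    return a + b\n"

def check(code: str) -> tuple[bool, str]:
    # Stand-in for compiler/test feedback: here, just a syntax check.
    try:
        compile(code, "<generated>", "exec")
        return True, ""
    except SyntaxError as e:
        return False, str(e)

def collect_sample(spec: str, rounds: int = 3):
    prompt = f"Implement this spec:\n{spec}"
    for _ in range(rounds):
        code = fake_llm(prompt)
        ok, errors = check(code)
        if ok:
            return (spec, code)     # a verified pair to train on later
        prompt = f"Spec:\n{spec}\nYour code failed:\n{errors}\nFix it."
    return None

print(collect_sample("add two numbers"))
```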
> And most importantly, it cannot fundamentally rationalize about what code does or how it functions.
Most competent LLMs can trivially describe what some code does and speculate on the reasoning behind it.
I don’t disagree that they’re flawed and imperfect, but I also do not think this is an unassailable state of affairs. They’re only going to get better from here.
[0]: https://arxiv.org/abs/2309.05463
[1]: https://voyager.minedojo.org/
-
AutoGen: Enable Next-Gen Large Language Model Applications
In a way it is the same thing; agents are mostly an abstraction that makes it easier to know what's going on.
I think of agents more or less as Python classes with a mixture of natural language and code functions. You design them to do something with information they produce, and to interface with other agents or "tools" in some way.
But all the agents can be the same language model under the hood; they are frames used to build different kinds of contexts.
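A rough sketch of that mental model (all names hypothetical, tied to no particular framework): an agent is just a class holding a prompt, some callable tools, and a shared model.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    system_prompt: str                           # the natural-language half
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)  # the code half

    def act(self, message: str, llm: Callable[[str], str]) -> str:
        # Every agent can share the same underlying model; the prompt is the
        # "frame" that builds a different context around it.
        reply = llm(f"{self.system_prompt}\n\nUser: {message}")
        # Toy dispatch protocol: a reply like "CALL:search:some query" invokes a tool.
        if reply.startswith("CALL:"):
            _, tool, arg = reply.split(":", 2)
            return self.tools[tool](arg)
        return reply
```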
And yes I think the idea is that emergent behaviour can be useful. This comes to mind
https://github.com/MineDojo/Voyager
But I think we are still a small ways off from being really smart about agents. My opinion is that we haven’t quite figured out what we are doing yet.
-
Open/Local LLM support for MineDojo/Voyager
This k8s application deploys an instance of Voyager along with a Fabric Minecraft server with the required Fabric mods. It assumes you have a local deployment of a large language model (LLM) with a 4K-8K token context length exposing an OpenAI-compatible API, including embeddings support.
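For context, "OpenAI-compatible API" means you can point a standard OpenAI client at the local server. A minimal sketch, assuming a hypothetical local endpoint and model name:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your local LLM server (assumption)
    api_key="not-needed-locally",
)

chat = client.chat.completions.create(
    model="local-model",                  # whatever your server serves (assumption)
    messages=[{"role": "user", "content": "Plan the next Minecraft task."}],
)
emb = client.embeddings.create(model="local-model", input=["skill: craft a pickaxe"])
print(chat.choices[0].message.content, len(emb.data[0].embedding))
```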
- Voyager – Minecraft Embodied Agent with Large Language Models
-
List of Awesome AI Agents like AutoGPT and BabyAGI / Many open-source Agents with code included!
In my opinion the most interesting agents:
- Auto-GPT: https://github.com/Significant-Gravitas/Auto-GPT
- BabyAGI: https://github.com/yoheinakajima/babyagi
- Voyager: https://github.com/MineDojo/Voyager (paper: https://arxiv.org/abs/2305.16291)
I would also add:
- ChemCrow (augmenting large language models with chemistry tools): https://github.com/ur-whitelab/chemcrow-public/ (paper: https://arxiv.org/abs/2304.05376)
-
[D] - Are there any AI benchmarks that involve successful longterm problem solving when running as autonomous agents (like in autogpt)? How do we compare the effectiveness of models as agents?
Does this beat Voyager? I read about it and wondered: what if we add a skill library to LangChain/LlamaIndex agents? It could be the same vector store used for static data, but after each task is performed, the agent evaluates and archives the recipe of steps used to perform the new task. The next time the agent is asked to perform a task, it can just look in the library to retrieve a recipe. Unlike traditional fine-tuning, you don't update the model parameters; these recipes are much more interpretable and can be manually edited or inserted by humans. There may also be an automatic way to convert wikiHow articles or YouTube tutorials into recipes.
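A toy sketch of that skill-library idea (hypothetical, tied to no particular framework): archive one recipe per solved task and retrieve the nearest one by embedding similarity, with no parameter updates.

```python
import numpy as np

class SkillLibrary:
    def __init__(self, embed):                  # embed: str -> np.ndarray
        self.embed = embed
        self.skills = []                        # (task, recipe, vector) triples

    def archive(self, task: str, recipe: list[str]) -> None:
        # After the agent evaluates a successful run, save the steps.
        self.skills.append((task, recipe, self.embed(task)))

    def retrieve(self, task: str):
        # Nearest past task by cosine similarity; recipes stay human-editable text.
        if not self.skills:
            return None
        q = self.embed(task)
        sims = [v @ q / (np.linalg.norm(v) * np.linalg.norm(q) + 1e-8)
                for _, _, v in self.skills]
        return self.skills[int(np.argmax(sims))][1]

# Usage with any embedding function:
# lib = SkillLibrary(embed=my_embedding_fn)
# lib.archive("craft a pickaxe", ["punch tree", "make planks", "craft pickaxe"])
# lib.retrieve("craft a stone pickaxe")  # -> closest stored recipe
```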
-
GPT-4 was set free in Minecraft, here's what happened next...
Source. P.S. If you love geeking over AI updates, I have this free newsletter you might want to check out. Thank you!
What are some alternatives?
alpaca-lora - Instruct-tune LLaMA on consumer hardware
GITM - Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
tree-of-thought-llm - [NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
llm-awq - AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
ggml - Tensor library for machine learning
mineflayer - Create Minecraft bots with a powerful, stable, and high level JavaScript API.
alpaca_lora_4bit
gorilla - Gorilla: An API store for LLMs
llm-foundry - LLM training code for Databricks foundation models