tree-of-thoughts
qlora
| | tree-of-thoughts | qlora |
|---|---|---|
| Mentions | 26 | 80 |
| Stars | 4,042 | 9,432 |
| Growth | - | - |
| Activity | 8.8 | 7.4 |
| Last commit | 2 months ago | 7 months ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tree-of-thoughts
- [D] Potential scammer on github stealing work of other ML researchers?
I checked the issues and found https://github.com/kyegomez/tree-of-thoughts/issues/78
- (2/2) May 2023
Plug in and Play Implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that Elevates Model Reasoning by at least 70% (https://github.com/kyegomez/tree-of-thoughts)
- Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures
Same deal with amplification research like Tree of Thoughts, AdaPlanner, and Ghost in the Minecraft, and the same deal with agentized LLMs like Auto-GPT emphasizing testing regimens. They want efficiency and explainability, not this "mine is bigger than yours" nonsense coming out of Microsoft, Google, or Meta (which isn't even the entire picture of the open-source ML research within those firms either). There's this idealized "neurosymbolic AI" where everyone just wants code to do a job, so there should only be so much probabilistic behavior to learn the jobs that aren't learned to begin with, but the fact remains that the actual researchers and engineers want something that is as deterministic as an imperative language can be. Perhaps we'll achieve functional depth, and instead of some outdated "paperclip maximizer", we summon Maxwell's demon via a "complete" Church-Turing thesis. In other words, while a "vastly superior being in intelligence" is a really bad time for anyone with an intellect-based superiority complex, the rest of us are humble enough to use this information science to further explore the unknown.
- Tree of Thought (ToT) and AutoGPT
- Tree of Thoughts
This is Shunyu, author of Tree of Thoughts (arxiv.org/abs/2305.10601).
The official code to replicate the paper results is https://github.com/ysymyth/tree-of-thought-llm
Not https://github.com/kyegomez/tree-of-thoughts, which, according to many who told me, is not a correct or good implementation of ToT and damages the reputation of ToT.
I explained the situation here: https://twitter.com/ShunyuYao12/status/1663946702754021383
I'd appreciate your help by unstarring his repo and starring mine, as GitHub and Google searches currently go to his repo by default, which has been very misleading for many users.
- Has anybody tried their models with "Tree of Thoughts"?
I hacked a dirty PR into this derivative repo to run it with the oobabooga API: https://github.com/kyegomez/tree-of-thoughts/pull/8
- Tree of Thoughts: Deliberate Problem Solving with LLMs
qlora
- FLaNK Stack Weekly for 30 Oct 2023
- I released Marx 3B V3.
Marx 3B V3 is StableLM 3B 4E1T instruction-tuned on EverythingLM Data V3 (ShareGPT format) for 2 epochs using QLoRA.
- Tuning and Testing Llama 2, Flan-T5, and GPT-J with LoRA, Sematic, and Gradio
https://github.com/artidoro/qlora
The tools and mechanisms for getting a model to do what you want change constantly and quickly. Build and understand a notebook yourself, and reduce dependencies; you will need to switch them eventually.
- Yet another QLoRA tutorial
My own project is still in raw generated form, and this makes me think about trying qlora's scripts, since it gives me some confidence I should be able to get it to turn out now that someone else has carved a path and charted the map. I was going to target llamatune, which was mentioned here the other day.
- Creating a new Finetuned model
Most papers I read showed at least a thousand examples, even 10,000 in several cases, so I assumed that to be the trend for low-rank adapter (PEFT) training (sources: [2305.14314] QLoRA: Efficient Finetuning of Quantized LLMs (arxiv.org), Stanford CRFM (Alpaca), and, at the low end, openchat/openchat on Hugging Face; there are many more examples).
- [R] LaVIN-lite: Training your own Multimodal Large Language Models on one single GPU with competitive performance! (Technical Details)
4-bit quantization training mainly refers to QLoRA. Simply put, QLoRA quantizes the weights of the LLM into 4-bit for storage, while dequantizing them into 16-bit during the training process to preserve training precision. This method significantly reduces GPU memory overhead during training (training speed should not vary much), and it combines well with parameter-efficient methods. However, the original paper was designed for single-modal LLMs and the code has already been wrapped in HuggingFace's library, so we extracted the core code from HuggingFace's library and migrated it into LaVIN's code. The main principle is to replace all linear layers in the LLM with 4-bit quantized layers. Those interested can refer to our implementation in quantization.py and mm_adaptation.py, which is roughly a dozen lines of code.
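The layer-swap idea described in that comment can be sketched roughly as below using bitsandbytes' Linear4bit. This is a hedged illustration, not LaVIN's or HuggingFace's actual code; the function name, the skip list, and the weight-copying detail are assumptions about a typical workflow.

```python
# Rough sketch (not LaVIN's code): recursively swap nn.Linear modules for
# bitsandbytes 4-bit layers, the core trick QLoRA-style training relies on.
import torch
import torch.nn as nn
import bitsandbytes as bnb

def replace_linear_with_4bit(module: nn.Module, skip=("lm_head",)):
    """Replace every nn.Linear (except names in `skip`) with a Linear4bit layer.

    Weights are stored in 4-bit (NF4) and dequantized to 16-bit on the fly for
    the matmuls, which is what cuts GPU memory during training.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.Linear) and name not in skip:
            quant = bnb.nn.Linear4bit(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                compute_dtype=torch.bfloat16,   # 16-bit compute, 4-bit storage
                quant_type="nf4",
            )
            # Reuse the pretrained weights; they are quantized when the module
            # is moved to the GPU (an assumption about the usual workflow).
            quant.weight.data = child.weight.data
            if child.bias is not None:
                quant.bias.data = child.bias.data
            setattr(module, name, quant)
        else:
            replace_linear_with_4bit(child, skip)
    return module
```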
- [D] To all the machine learning engineers: most difficult model task/type you’ve ever had to work with?
There have been some new developments like QLoRA which help fine-tune LLMs without updating all the weights.
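To make that point concrete, this is roughly what a QLoRA-style setup looks like with the Hugging Face transformers and peft libraries; it is a minimal sketch, and the checkpoint name and target module names are placeholders rather than anything from the thread.

```python
# Hedged sketch: load the base model with 4-bit weights and train only small
# LoRA adapters on top, so the full weight matrices are never updated.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # frozen base weights stored in 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",                  # placeholder checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # assumed attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapter weights are trainable
```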
-
Finetune MPT-30B using QLORA
This might be helpful: https://github.com/artidoro/qlora/issues/10
- is lora fine-tuning on 13B/33B/65B comparable to full fine-tuning?
Curious, since the QLoRA paper only reports the LoRA/QLoRA comparison against full fine-tuning for small 7B models; for 13B/33B/65B it does not do so (Table 4 in the paper). It would be helpful if anyone could provide links where I can read more on the efficacy or disadvantages of LoRA.
- Need a detailed tutorial on how to create and use a dataset for QLoRA fine-tuning.
This might not be the appropriate answer, but did you take a look at this repository? https://github.com/artidoro/qlora With artidoro's repository it's pretty easy to train QLoRA. You just prepare your own dataset and run the following command:
python qlora.py --model_name_or_path --dataset="path/to/your/dataset" --dataset_format="self-instruct"
This is only available for several dataset formats, but every dataset format has to have input-output pairs, so the dataset JSON has to look like this:
[ { "input": "something", "output": "something" }, { "input": "something", "output": "something" } ]
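As a concrete version of the dataset file described in that answer, the following Python snippet writes a JSON list of input/output pairs; the file name and example strings are made up for illustration, not taken from the original post.

```python
# Hypothetical helper that writes a dataset in the input/output JSON layout
# described above. File name and example contents are placeholders.
import json

examples = [
    {"input": "Explain QLoRA in one sentence.",
     "output": "QLoRA fine-tunes a 4-bit quantized LLM by training small low-rank adapters."},
    {"input": "Name one advantage of parameter-efficient fine-tuning.",
     "output": "It needs far less GPU memory than full fine-tuning."},
]

with open("my_dataset.json", "w") as f:
    json.dump(examples, f, indent=2, ensure_ascii=False)
```

The resulting my_dataset.json would then be the path passed to the --dataset flag in the command quoted above.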
What are some alternatives?
Awesome-Prompt-Engineering - This repository contains hand-curated resources for Prompt Engineering with a focus on Generative Pre-trained Transformers (GPT), ChatGPT, PaLM, etc.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
tree-of-thought-llm - [NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
GirlfriendGPT - Girlfriend GPT is a Python project to build your own AI girlfriend using ChatGPT4.0
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
prompt-engineering - Tips and tricks for working with Large Language Models like OpenAI's GPT-4.
ggml - Tensor library for machine learning
Mr.-Ranedeer-AI-Tutor - A GPT-4 AI Tutor Prompt for customizable personalized learning experiences.
alpaca_lora_4bit
Neurite - Fractal Graph Desktop for Ai-Agents, Web-Browsing, Note-Taking, and Code.
llm-foundry - LLM training code for Databricks foundation models