Finetune_LLMs vs mergekit

| | Finetune_LLMs | mergekit |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 438 | 3,521 |
| Growth | - | 16.5% |
| Activity | 8.5 | 9.2 |
| Last commit | about 1 month ago | 3 days ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | GNU Lesser General Public License v3.0 only |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
Finetune_LLMs
- Prepare Dataset
Regarding this: if you have the resources (at least Colab Pro), you would be much better off training GPT-J (aka GPT-J-6B). Not only is it 4x larger than the largest GPT-2, but its architecture, AFAIK, is based on GPT-3's. You can use this repo as a good example for GPT-J fine-tuning (a minimal sketch follows after these mentions).
- [D] Fine-tuning GPT-J: lessons learned
And this: https://github.com/mallorbc/Finetune_GPTNEO_GPTJ6B
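To make the workflow discussed above concrete, here is a minimal fine-tuning sketch using the Hugging Face Transformers Trainer. It is an illustration under assumptions, not the linked repo's actual training script: the dataset file, sequence length, and hyperparameters are placeholders, and the repo layers DeepSpeed and other memory optimizations on top of this basic loop.

```python
# Minimal GPT-J fine-tuning sketch with Hugging Face Transformers.
# Dataset path and hyperparameters are illustrative placeholders; the
# linked repo adds DeepSpeed and other optimizations on top of this idea.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical dataset: one training example per line of plain text.
dataset = load_dataset("text", data_files={"train": "train.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gptj-finetuned",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,  # emulate a larger batch on one GPU
        num_train_epochs=1,
        fp16=True,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```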
mergekit
- Language Models Are Super Mario: Absorbing Abilities from Homologous Models
For others like me who hadn't heard of model merging before: this seems to be one tool for it [0] (there may be others).
[0] https://github.com/arcee-ai/mergekit
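For intuition about what a merge actually does: the simplest method is a plain weighted average of parameter tensors across models that share an architecture. A rough sketch of that idea, with placeholder model names (mergekit implements this plus far more sophisticated methods):

```python
# Conceptual sketch of linear (weight-averaging) model merging: blend the
# parameters of two models with identical architectures. Model names are
# placeholders; real merges need matching tokenizers and layer shapes.
import torch
from transformers import AutoModelForCausalLM

def linear_merge(model_a, model_b, alpha=0.5):
    """Overwrite model_a's weights with alpha*A + (1-alpha)*B and return it."""
    state_a = model_a.state_dict()
    state_b = model_b.state_dict()
    merged = {
        name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
        for name in state_a
    }
    model_a.load_state_dict(merged)
    return model_a

a = AutoModelForCausalLM.from_pretrained("org/model-a", torch_dtype=torch.float16)
b = AutoModelForCausalLM.from_pretrained("org/model-b", torch_dtype=torch.float16)
merged = linear_merge(a, b, alpha=0.5)
merged.save_pretrained("./merged-model")
```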
- FLaNK AI Weekly 25 March 2025
- Tools for merging pretrained large language models
- Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM
mergekit is the tool you need to do this
https://github.com/cg123/mergekit
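As a sketch of what using it looks like: mergekit merges are driven by a YAML config and, per my reading of its README (verify against the current repo, since the API may change), can also be run from Python. The model names below are placeholders.

```python
# Sketch of driving mergekit from Python, adapted from my reading of the
# mergekit README -- treat the imports and options as assumptions and check
# the current repo. The config SLERPs two hypothetical Llama-2 fine-tunes.
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG = """
slices:
  - sources:
      - model: org/llama2-finetune-a   # placeholder model names
        layer_range: [0, 32]
      - model: org/llama2-finetune-b
        layer_range: [0, 32]
merge_method: slerp
base_model: org/llama2-finetune-a
parameters:
  t: 0.5
dtype: float16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG))
run_merge(
    merge_config,
    "./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```

The repo also ships a command-line entry point (mergekit-yaml) that consumes the same configs, which is how most people seem to run it.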
- Iambe-RP-20b: An uncensored L2 Frankenstein model directly trained with RP-oriented cDPO
I actually asked the creator of mergekit a question here. In his response, I learned how to use task_arithmetic to isolate the deltas. One could, in theory, use WANDA on that model from the second example, then merge it back into another model. However, that's firmly past the frontier of what has been tried, so experimentation might be messy.
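For context on the technique mentioned there: task arithmetic treats a fine-tuned model as base weights plus a delta (a "task vector"), so the delta can be isolated, pruned or rescaled, and re-applied to another homologous model. A rough sketch under that framing, with placeholder model names (mergekit's task_arithmetic method handles this far more robustly):

```python
# Rough sketch of task arithmetic: a fine-tune is treated as base + delta,
# so the delta can be isolated, scaled, and re-applied elsewhere. Model
# names are placeholders, and all three models are assumed homologous
# (identical architectures and parameter names).
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("org/base-model").state_dict()
tuned = AutoModelForCausalLM.from_pretrained("org/finetuned-model").state_dict()

# Isolate the task vector: what fine-tuning changed relative to the base.
delta = {name: tuned[name] - base[name] for name in base}

# Re-apply the (optionally scaled) delta to a different homologous model.
target = AutoModelForCausalLM.from_pretrained("org/other-model")
state = target.state_dict()
scale = 0.8  # arbitrary weighting of the task vector
target.load_state_dict(
    {name: state[name] + scale * delta[name] for name in state}
)
target.save_pretrained("./task-arithmetic-merged")
```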
- LLMs cannot find reasoning errors, but can correct them
Ah, actually, reviewing that more closely, I found a link to it in the acknowledgements:
https://github.com/cg123/mergekit
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
xTuring - Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
LLMLingua - To speed up LLM inference and enhance LLMs' perception of key information, LLMLingua compresses the prompt and KV-cache, achieving up to 20x compression with minimal performance loss.
code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
task_vectors - Editing Models with Task Arithmetic
AnglE - Angle-optimized Text Embeddings | 🔥 SOTA on STS and MTEB Leaderboard
difftastic - a structural diff that understands syntax 🟥🟩
GoLLIE - Guideline following Large Language Model for Information Extraction
makeMoE - From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :)
replicate-llama2-sms-chatbot
LaVague - Large Action Model framework to turn natural language into browser actions