|  | mergekit | task_vectors |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 3,521 | 352 |
| Growth | 18.7% | 5.4% |
| Activity | 9.2 | 0.7 |
| Last commit | 5 days ago | 4 months ago |
| Language | Python | Python |
| License | GNU Lesser General Public License v3.0 only | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mergekit
- Language Models Are Super Mario: Absorbing Abilities from Homologous Models
For others like me who hadn't heard of model merging before: this seems to be one tool for it[0] (there may be others).
[0] https://github.com/arcee-ai/mergekit
- FLaNK AI Weekly 25 March 2025
- Tools for merging pretrained large language models
- Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM
mergekit is the tool you need to do this
https://github.com/cg123/mergekit
- Iambe-RP-20b: An uncensored L2 Frankenstein model directly trained with RP-oriented cDPO
I actually asked the creator of mergekit a question here. In his response, I learned how to use task_arithmetic to isolate the deltas. One could, in theory, use WANDA on that model from the second example, then merge it back into another model. However, that's firmly past the frontier of what has been tried, so experimentation might be messy.
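The workflow described above (isolate the finetuning deltas with task arithmetic, optionally sparsify them, then merge them back) can be sketched in a few lines. This is a hedged toy illustration, not mergekit's actual implementation: real task vectors are per-parameter tensors in a model state dict, while plain dicts of floats stand in for them here, and all names are illustrative.

```python
# Toy sketch of task arithmetic, assuming both models share a common base.
# The "task vector" (delta) is finetuned - base, element-wise; a possibly
# sparsified/pruned delta can then be scaled and added into another model.

def task_vector(base, finetuned):
    """Isolate the delta introduced by finetuning."""
    return {k: finetuned[k] - base[k] for k in base}

def apply_delta(model, delta, scale=1.0):
    """Merge a (possibly sparsified) delta back into a compatible model."""
    return {k: model[k] + scale * delta.get(k, 0.0) for k in model}

base = {"w1": 0.5, "w2": -1.0}
finetuned = {"w1": 0.9, "w2": -1.2}

delta = task_vector(base, finetuned)           # deltas: w1 ~ 0.4, w2 ~ -0.2
merged = apply_delta({"w1": 1.0, "w2": 0.0}, delta, scale=0.5)
```

A pruning method like WANDA would, in this picture, zero out entries of `delta` before `apply_delta`, which is where the "sparsify the deltas, then merge" idea comes in.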
- LLMs cannot find reasoning errors, but can correct them
Ah, actually, reviewing that more closely, I found a link to it in the acknowledgements.
https://github.com/cg123/mergekit
task_vectors
- Iambe-RP-20b: An uncensored L2 Frankenstein model directly trained with RP-oriented cDPO
Would you be willing to elaborate on this paragraph? I found the GitHub pages for SparseGPT and WANDA and I'll read up on those methods (they're both new to me). I've seen a reference to task_arithmetic before in the code that the DARE authors produced for model merging, but it's also a new concept for me. I found this paper and this associated GitHub project. Do you recommend other reading or tools for task_arithmetic? Do you think DARE + TIES merging obviates the utility of sparsifying a model prior to merging the way you described?
- A New Paradigm For Editing Machine Learning Models Based on Arithmetic Operations Over Task Vectors
Github: https://github.com/mlfoundations/task_vectors
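The core operations behind the task-vectors idea are simple to sketch: a task vector is the element-wise difference between finetuned and pretrained weights, and summing or negating such vectors edits the model (add to gain abilities, negate to unlearn). The toy numbers and names below are illustrative, not taken from the repo, and scalars per key stand in for real per-parameter tensors.

```python
# Hedged sketch of arithmetic over task vectors:
# tau = theta_finetuned - theta_pretrained.
# Summing taus builds a multi-task model; negating one "forgets" that task.

def sub(a, b):
    return {k: a[k] - b[k] for k in a}

def add(a, b, scale=1.0):
    return {k: a[k] + scale * b[k] for k in a}

pretrained = {"w": 1.0}
ft_task_a = {"w": 1.5}   # model finetuned on task A
ft_task_b = {"w": 0.8}   # model finetuned on task B

tau_a = sub(ft_task_a, pretrained)               # task vector for A
tau_b = sub(ft_task_b, pretrained)               # task vector for B

multi_task = add(add(pretrained, tau_a), tau_b)  # aims to do both A and B
forgot_a = add(pretrained, tau_a, scale=-1.0)    # negation: unlearn A
```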
What are some alternatives?
Finetune_LLMs - Repo for fine-tuning causal LLMs
xTuring - Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
LLMLingua - To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-cache, achieving up to 20x compression with minimal performance loss.
difftastic - a structural diff that understands syntax 🟥🟩
makeMoE - From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :)
LaVague - Large Action Model framework to turn natural language into browser actions
examples - This repository will contain examples of use cases that utilize the Decodable streaming solution
Chinese-LLaMA-Alpaca - Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment
spring-ai - An Application Framework for AI Engineering
pretzelai - Open-source, browser-local data exploration using DuckDB-Wasm and PRQL