mergekit vs Chinese-LLaMA-Alpaca

| | mergekit | Chinese-LLaMA-Alpaca |
|---|---|---|
| Mentions | 6 | 4 |
| Stars | 3,521 | 17,466 |
| Growth | 18.7% | - |
| Activity | 9.2 | 8.3 |
| Last commit | 6 days ago | 10 days ago |
| Language | Python | Python |
| License | GNU Lesser General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mergekit
- Language Models Are Super Mario: Absorbing Abilities from Homologous Models
For others like me who hadn't heard of model merging before, this seems to be one tool[0] (there may be others)
[0] https://github.com/arcee-ai/mergekit
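The "Super Mario" paper's core trick, DARE (randomly drop delta parameters and rescale the survivors), is one of the merge methods mergekit implements, as `dare_ties`. A minimal config sketch, assuming two real but arbitrarily chosen Llama-2-13b fine-tunes; the model choices and density/weight values are illustrative, not from the thread:

```yaml
# Hypothetical mergekit config: DARE delta-dropping plus TIES sign
# election, merging two fine-tunes back onto their shared base model.
merge_method: dare_ties
base_model: meta-llama/Llama-2-13b-hf
models:
  - model: WizardLM/WizardLM-13B-V1.2    # illustrative fine-tune
    parameters:
      density: 0.5   # keep ~50% of each delta, rescale the rest
      weight: 0.5
  - model: lmsys/vicuna-13b-v1.5         # illustrative fine-tune
    parameters:
      density: 0.5
      weight: 0.5
dtype: float16
```

Running `mergekit-yaml config.yml ./merged-model` (the CLI entry point the repo documents) then produces a standard Hugging Face checkpoint.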
- FLaNK AI Weekly 25 March 2025
- Tools for merging pretrained large language models
- Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM
mergekit is the tool you need to do this
https://github.com/cg123/mergekit
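For the simplest kind of merge the comment alludes to, here is a sketch of a plain weighted average in mergekit's config format, using two same-architecture Mistral-7B chat models as placeholders (the post names no specific models):

```yaml
# Hypothetical mergekit config: 60/40 linear weight average of two
# models that share an architecture and tokenizer.
merge_method: linear
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2   # placeholder
    parameters:
      weight: 0.6
  - model: HuggingFaceH4/zephyr-7b-beta         # placeholder
    parameters:
      weight: 0.4
dtype: float16
```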
- Iambe-RP-20b: An uncensored L2 Frankenstein model directly trained with RP-oriented cDPO
I actually asked the creator of mergekit a question here. In his response, I learned how to use task_arithmetic to isolate the deltas. One could, in theory, use WANDA on that model from the second example, then merge it back into another model. However, that's firmly past the frontier of what has been tried, so experimentation might be messy.
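To make "isolate the deltas" concrete: with mergekit's `task_arithmetic` method, the output is `base_model + sum_i weight_i * (model_i - base_model)`, so pairing a fine-tune and a target model, each at weight 1.0, against the fine-tune's original base transplants the fine-tune's delta onto the target. A sketch with placeholder model names, since the thread doesn't name specific checkpoints:

```yaml
# Hypothetical mergekit config: transplant one fine-tune's delta onto a
# different target model.
#   result = base + (finetune - base) + (target - base)
#          = target + (finetune - base)
merge_method: task_arithmetic
base_model: meta-llama/Llama-2-13b-hf    # base the deltas are measured against
models:
  - model: example-org/rp-finetune-13b   # placeholder: delta donor
    parameters:
      weight: 1.0
  - model: example-org/target-model-13b  # placeholder: delta recipient
    parameters:
      weight: 1.0
dtype: float16
```

(WANDA, mentioned above, is a separate weights-times-activations pruning method; mergekit would only handle the merging step.)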
- LLMs cannot find reasoning errors, but can correct them
Ah, actually reviewing that more closely I found a link to it in the acknowledgements.
https://github.com/cg123/mergekit
Chinese-LLaMA-Alpaca
- Chinese-Alpaca-Plus-13B-GPTQ
I'd like to share with you today the Chinese-Alpaca-Plus-13B-GPTQ model, which is a GPTQ-format, 4-bit quantised version of Yiming Cui's Chinese-LLaMA-Alpaca 13B for GPU inference.
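A minimal loading sketch for a checkpoint like this, assuming the AutoGPTQ package and a CUDA device; the repo id matches the upload the post appears to describe, but treat it and the generation settings as illustrative:

```python
# Minimal sketch: run a 4-bit GPTQ checkpoint on GPU with AutoGPTQ.
# Assumes `pip install auto-gptq transformers` and a CUDA device.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "TheBloke/Chinese-Alpaca-Plus-13B-GPTQ"  # assumed HF repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",
    use_safetensors=True,
)

prompt = "请用中文介绍一下你自己。"  # "Please introduce yourself in Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```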
- How to train a new language that is not in base model?
You could follow what people did with Chinese-LLaMA, just for Korean. You might want to have a pure Korean corpus before feeding in a translation dataset. How big is it, by the way?
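For context, the Chinese-LLaMA-Alpaca recipe extends the LLaMA tokenizer with language-specific tokens (via a new SentencePiece model merged into the original vocab) and then continues pretraining on the new-language corpus before instruction tuning. A minimal sketch of just the vocab-extension step, with a placeholder model id and placeholder tokens:

```python
# Minimal sketch: add new-language tokens to a LLaMA tokenizer and
# resize the model's embeddings before continued pretraining.
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder base model
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id)

# Placeholder Korean tokens; in practice you'd train a SentencePiece
# model on the Korean corpus and merge its pieces into LLaMA's vocab.
new_tokens = ["안녕하세요", "감사합니다"]
num_added = tokenizer.add_tokens(new_tokens)

# New embedding rows are randomly initialized and must be learned
# during continued pretraining on the Korean corpus.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocab size is now {len(tokenizer)}")
```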
- Open Source Chinese LLMs
- Is it possible to fine-tune the LLaMA model to better understand another language?
Chinese: https://github.com/ymcui/Chinese-LLaMA-Alpaca
What are some alternatives?
Finetune_LLMs - Repo for fine-tuning Causal LLMs
ChatGLM2-6B - ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型
xTuring - Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
CodeCapybara - Open-source Self-Instruction Tuning Code LLM
LLMLingua - To speed up LLM inference and enhance LLMs' perception of key information, LLMLingua compresses the prompt and KV-cache, achieving up to 20x compression with minimal performance loss.
LLMSurvey - The official GitHub page for the survey paper "A Survey of Large Language Models".
task_vectors - Editing Models with Task Arithmetic
paxml - Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimentation and parallelization, and has demonstrated industry leading model flop utilization rates.
difftastic - a structural diff that understands syntax 🟥🟩
Qwen-VL - The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision-language model proposed by Alibaba Cloud.
makeMoE - From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :)
LLM-Agent-Paper-List - The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.