| | mergekit | xTuring |
|---|---|---|
| Mentions | 6 | 31 |
| Stars | 3,521 | 2,524 |
| Growth | 18.7% | 0.9% |
| Activity | 9.2 | 8.4 |
| Latest commit | 6 days ago | about 1 month ago |
| Language | Python | Python |
| License | GNU Lesser General Public License v3.0 only | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mergekit
-
Language Models Are Super Mario: Absorbing Abilities from Homologous Models
For others like me who’d not heard of merging before, this seems to be one tool[0] (there may be others)
[0] https://github.com/arcee-ai/mergekit
- FLaNK AI Weekly 25 March 2025
- Tools for merging pretrained large language models
-
Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM
mergekit is the tool you need to do this
https://github.com/cg123/mergekit
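For readers new to merging: at its simplest, a merge is just arithmetic over checkpoints' weights. A minimal sketch of a linear merge, assuming both models share an architecture and parameter names (this illustrates the idea, not mergekit's internals):

```python
import torch

def linear_merge(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
    """Weighted average of two state dicts; alpha is the weight on model A."""
    return {name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
            for name in state_a}

# e.g. merged = linear_merge(torch.load("a.bin"), torch.load("b.bin"), 0.5)
```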
-
Iambe-RP-20b: An uncensored L2 Frankenstein model directly trained with RP-oriented cDPO
I actually asked the creator of mergekit a question here. In his response, I learned how to use task_arithmetic to isolate the deltas. One could, in theory, use WANDA on that model from the second example, then merge it back into another model. However, that's firmly past the frontier of what has been tried, so experimentation might be messy.
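The arithmetic behind isolating deltas (from "Editing Models with Task Arithmetic", listed under the alternatives below) is straightforward. A hedged sketch, not mergekit's actual code, assuming both state dicts share keys and shapes:

```python
def extract_delta(finetuned: dict, base: dict) -> dict:
    """The 'task vector': what fine-tuning added on top of the base weights."""
    return {name: finetuned[name] - base[name] for name in base}

def apply_delta(target: dict, delta: dict, weight: float = 1.0) -> dict:
    """Graft a task vector onto another model, scaled by `weight`."""
    return {name: target[name] + weight * delta[name] for name in target}
```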
-
LLMs cannot find reasoning errors, but can correct them
Ah, actually reviewing that more closely I found a link to it in the acknowledgements.
https://github.com/cg123/mergekit
xTuring
-
I'm developing an open-source AI tool called xTuring, enabling anyone to construct a Language Model with just 5 lines of code. I'd love to hear your thoughts!
Explore the project on GitHub: https://github.com/stochasticai/xturing. A sketch of the advertised flow follows below.
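The advertised five-line flow looks roughly like this (adapted from the project's README of the time; the `llama_lora` model key and the Alpaca-style dataset path are the README's own examples, so treat this as a sketch rather than the current API):

```python
from xturing.datasets.instruction_dataset import InstructionDataset
from xturing.models import BaseModel

dataset = InstructionDataset("./alpaca_data")  # instruction-tuning data
model = BaseModel.create("llama_lora")         # LLaMA with LoRA adapters
model.finetune(dataset=dataset)                # one-call fine-tuning
```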
-
LLaMA 2 fine-tuning made easier and faster
If you're curious, I encourage you to:
- Dive deeper with the LLaMA 2 tutorial here.
- Explore the project on GitHub: https://github.com/stochasticai/xturing.
- Connect with our community on Discord here.
-
RAG vs. Fine-Tuning
If you want the best performance, you need to do both RAG and fine-tuning very well. There are plenty of resources on fine-tuning, though. I'm one of the contributors to https://github.com/stochasticai/xturing, a project focused on fine-tuning LLMs. You can find help in the Discord channel listed on the GitHub page.
- Build, customize and control your own personal LLMs via xTuring OSS
-
Finetuning LLaMA 2 (the base models) ?
What tools do you use, and which got you great results? … For me, I have tried xturing and SFTTrainer, and they got me semi-okay results.
-
Finetuning using Google Colab (Free Tier)
Code: https://github.com/stochasticai/xTuring/blob/main/examples/llama/llama_lora_int8.py
Colab: https://colab.research.google.com/drive/1SQUXq1AMZPSLD4mk3A3swUIc6Y2dclme?usp=sharing
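Judging by the example's filename, the Colab boils down to the same flow as the basic example but with the int8 LoRA variant of LLaMA, which is what keeps it within free-tier GPU memory. A sketch under that assumption (the `llama_lora_int8` key and dataset path are taken on faith from the example, not verified against the current API):

```python
from xturing.datasets.instruction_dataset import InstructionDataset
from xturing.models import BaseModel

dataset = InstructionDataset("./alpaca_data")
model = BaseModel.create("llama_lora_int8")  # int8-quantized LoRA fine-tuning
model.finetune(dataset=dataset)
```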
-
I would like to try my hand at finetuning some models. What is the best way to start? I have some questions that I'd appreciate your help on.
We are a group of researchers out of Harvard working on an open-source library called xTuring, focused on fine-tuning LLMs: https://github.com/stochasticai/xturing.
-
Fine tuning on my tweets
For fine-tuning, I was thinking about using this (low GPU memory footprint): https://github.com/stochasticai/xturing/blob/main/examples/int4_finetuning/README.md
-
Colab for finetuning llama models in 4-bit?
I can't speak for QLoRA, as I haven't had a chance to get an implementation working, but I've had success with StochasticAI's xTuring. It's by far the most streamlined method of finetuning I've come across, and they offer int8 and int4 finetuning (but only for llama-7B).
- Just wanna say this.
What are some alternatives?
Finetune_LLMs - Repo for fine-tuning Casual LLMs
quivr - Your GenAI Second Brain 🧠 A personal productivity assistant (RAG) ⚡️🤖 Chat with your docs (PDF, CSV, ...) & apps using Langchain, GPT 3.5 / 4 turbo, Private, Anthropic, VertexAI, Ollama, LLMs, Groq that you can share with users! Local & private alternative to OpenAI GPTs & ChatGPT powered by retrieval-augmented generation.
LLMLingua - Speeds up LLM inference and enhances LLMs' perception of key information by compressing the prompt and KV-cache, achieving up to 20x compression with minimal performance loss.
axolotl - Go ahead and axolotl questions
task_vectors - Editing Models with Task Arithmetic
FinGPT - FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace.
difftastic - a structural diff that understands syntax 🟥🟩
awesome-totally-open-chatgpt - A list of totally open alternatives to ChatGPT
makeMoE - From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :)
Meshtasticator - Discrete-event and interactive simulator for Meshtastic.
LaVague - Large Action Model framework to turn natural language into browser actions
Zicklein - Finetuning instruct-LLaMA on german datasets.