LLM-Finetuning-Hub vs llm-toys

| | LLM-Finetuning-Hub | llm-toys |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 638 | 115 |
| Growth | - | - |
| Activity | 9.5 | 7.2 |
| Latest Commit | about 1 month ago | 10 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
LLM-Finetuning-Hub
- Zephyr-7B QLoRA Benchmark for Summarization and Classification
Hi everyone, we've been working on benchmarking different open-source LLMs. In particular, we measure the performance of these models once fine-tuned (via QLoRA) on classic NLP downstream tasks like summarization and classification. We also put particular emphasis on benchmarking inference time/cost for these models once deployed.
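QLoRA trains only small low-rank adapter matrices on top of a quantized, frozen base model, which is what makes fine-tuning 7B-class models affordable. A minimal sketch of the parameter-count arithmetic (illustrative dimensions, not code from the repo): for a d×k weight matrix, a rank-r adapter adds factors of shape d×r and r×k, so only r·(d+k) parameters are trained.

```python
def lora_trainable_params(d: int, k: int, r: int) -> int:
    """Trainable parameters a rank-r LoRA adapter adds to a d x k weight."""
    # LoRA freezes the base weight W (d x k) and learns a low-rank
    # update B @ A, with B of shape (d, r) and A of shape (r, k).
    return r * (d + k)

# Example: one 4096 x 4096 attention projection (7B-model scale).
full_update = 4096 * 4096                         # params if W itself were trained
adapter = lora_trainable_params(4096, 4096, r=8)  # 65,536
print(adapter, adapter / full_update)             # roughly 0.4% of the full matrix
```

This is why the adapter checkpoints these benchmarks produce are megabytes rather than gigabytes: across all adapted layers, the trained fraction stays well under 1% of the base model.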
- Show HN: Finetuning LLMs: Open-source vs. Closed-source
Hello all,
I have been working on benchmarking different LLMs -- both open-source and closed-source.
Repo: https://github.com/georgian-io/LLM-Finetuning-Hub
Specifically, I am comparing their out-of-the-box capabilities (prompting) and their fine-tuned counterparts!
So far, the following models have been benchmarked:
Open-Source:
- FLaNK Stack Weekly for 12 September 2023
- [P][R] Finetune LLMs via the Finetuning Hub
- Show HN: Leverage Falcon 7B blog post - Finetuning with QLoRA
I evaluate how Falcon does on classification tasks when compared to BERT and DistilBERT.
Moreover, I talk about different ways you can deploy the model, and the associated costs!
The code for all of my experiments is available at: https://github.com/georgian-io/LLM-Finetuning-Hub
Happy reading and learning!
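Classification comparisons like the Falcon-vs-BERT one above are typically reported as accuracy or F1. As a self-contained sketch of macro-averaged F1 (a common metric for such benchmarks; not the repo's own evaluation code):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores."""
    labels = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)
    return sum(f1_scores) / len(f1_scores)

print(macro_f1(["a", "a", "b", "b"], ["a", "b", "b", "b"]))  # ≈ 0.733
```

Macro averaging weights every class equally, which matters when benchmark datasets have imbalanced labels.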
- Show HN: LLM Finetuning Hub
llm-toys
- How to fine tune llama2?
You can use the train script here https://github.com/kuutsav/llm-toys/blob/main/llm_toys/train.py. The readme contains a sample training command.
- [P] Finetuning QLoRAs for production use cases - Paraphrasing, Changing the tone of a sentence, Dialogue Summarization and Topic generation
All the details can be found here: https://github.com/kuutsav/llm-toys.
What are some alternatives?
ChatDev - Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
nanoChatGPT - nanogpt turned into a chat model
bedframe - Your Browser Extension Development Framework
DB-GPT-Hub - A repository that contains models, datasets, and fine-tuning techniques for DB-GPT, with the purpose of enhancing model performance in Text-to-SQL
wasmer-java - ☕ WebAssembly runtime for Java
unsloth - Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory
go-llama2 - Llama 2 inference in one file of pure Go
llama-recipes - Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A, along with a number of inference solutions (HF TGI, vLLM) for local or cloud deployment, plus demo apps showcasing Meta Llama 3 for WhatsApp & Messenger.
sqllineage - SQL Lineage Analysis Tool powered by Python
Zicklein - Finetuning instruct-LLaMA on german datasets.
rivet - The open-source visual AI programming environment and TypeScript library
marvin - ✨ Build AI interfaces that spark joy