LLM-Adapters vs LLM-Finetuning-Hub

| | LLM-Adapters | LLM-Finetuning-Hub |
|---|---|---|
| Mentions | 2 | 6 |
| Stars | 963 | 638 |
| Growth | 4.4% | - |
| Activity | 7.3 | 9.5 |
| Last commit | 2 months ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
LLM-Adapters
- Google DeepMind CEO Says Some Form of AGI Possible in a Few Years
That is not true; you can, for example, use an additional adapter to optimize, which takes about $50 and an hour. https://github.com/AGI-Edgerunners/LLM-Adapters
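For readers unfamiliar with the technique, adding "an additional adapter" usually looks something like the minimal LoRA sketch below. This assumes the Hugging Face transformers + peft stack; the base model and hyperparameters are illustrative choices, not settings taken from the LLM-Adapters repo.

```python
# Minimal LoRA sketch (assumptions: transformers + peft installed;
# base model and hyperparameters are illustrative, not from LLM-Adapters).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a small, openly available base model (illustrative choice).
base = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

# Attach low-rank adapter matrices to the attention projections;
# only these adapter weights are trained, which is why the cost stays low.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```

Because only the adapter parameters receive gradients, the memory and compute budget is a small fraction of full finetuning, which is what makes the "$50 and an hour" claim plausible.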
- LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of LLMs
LLM-Finetuning-Hub
- Zephyr-7B QLoRA Benchmark for Summarization and Classification
Hi everyone, we've been working on benchmarking different open-source LLMs. We focus, in particular, on the performance of these models once finetuned (via QLoRA) on classic NLP downstream tasks like summarization and classification. We also put particular emphasis on benchmarking inference time and cost for these models once deployed.
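For context, a QLoRA setup along these lines typically looks like the sketch below. This assumes the transformers + peft + bitsandbytes stack and a CUDA GPU; the Zephyr checkpoint name and hyperparameters are illustrative and not taken from the hub's benchmark code.

```python
# Minimal QLoRA setup sketch (assumptions: transformers + peft + bitsandbytes
# installed, CUDA GPU available; checkpoint and hyperparameters are illustrative,
# not taken from the LLM-Finetuning-Hub benchmarks).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model quantized to 4-bit NF4; compute runs in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta",
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the frozen 4-bit base for training, then add LoRA adapters;
# only the small adapter matrices are updated during finetuning.
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```

Quantizing the frozen base model to 4 bits while training only the adapters is what lets a 7B model like Zephyr be finetuned on a single consumer GPU.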
- Show HN: Finetuning LLMs: Open-source vs. Closed-source
Hello all,
I have been working on benchmarking different LLMs -- both open-source and closed-source.
Repo: https://github.com/georgian-io/LLM-Finetuning-Hub
Specifically, I am comparing their out-of-the-box (prompting) performance with that of their fine-tuned counterparts!
So far, the following models have been benchmarked:
Open-Source:
- FLaNK Stack Weekly for 12 September 2023
- [P][R] Finetune LLMs via the Finetuning Hub
- Show HN: Leverage Falcon 7B blog post - Finetuning with QLoRA
I evaluate how Falcon does on classification tasks when compared to Bert and Distilbert.
Moreover, I talk about different ways you can deploy the model, and the associated costs!
The code for all of my experiments is available at: https://github.com/georgian-io/LLM-Finetuning-Hub
Happy reading and learning!
- Show HN: LLM Finetuning Hub
What are some alternatives?
TencentPretrain - Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
ChatDev - Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
discus - A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ
bedframe - Your Browser Extension Development Framework
custom-diffusion - Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
wasmer-java - ☕ WebAssembly runtime for Java
hierarchical-domain-adaptation - Code of NAACL 2022 "Efficient Hierarchical Domain Adaptation for Pretrained Language Models" paper.
llm-toys - Small(7B and below) finetuned LLMs for a diverse set of useful tasks
AGIEval
go-llama2 - Llama 2 inference in one file of pure Go
adapters - A Unified Library for Parameter-Efficient and Modular Transfer Learning
sqllineage - SQL Lineage Analysis Tool powered by Python