| | Finetune_LLMs | GoLLIE |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 438 | 214 |
| Growth | - | 9.1% |
| Activity | 8.5 | 9.6 |
| Last commit | about 1 month ago | 29 days ago |
| Language | Python | Python |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Finetune_LLMs
Prepare Dataset
Regarding this: if you have the resources (at least Colab Pro), you would be much better off training GPT-J (aka GPT-J-6B). Not only is it 4x larger than the largest GPT-2, but its architecture, AFAIK, is based on GPT-3. You can use this repo as a good example of GPT-J fine-tuning.
[D] Fine-tuning GPT-J: lessons learned
And this: https://github.com/mallorbc/Finetune_GPTNEO_GPTJ6B
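The "Prepare Dataset" step mentioned above typically means converting prompt/completion pairs into a line-delimited JSON file the trainer can consume. As a rough illustration only (the function name, JSONL schema, and EOS token here are assumptions for the sketch, not the repo's actual format):

```python
import json
import tempfile
from pathlib import Path

def prepare_jsonl_dataset(records, out_path, eos_token="<|endoftext|>"):
    """Write prompt/completion pairs as one JSON object per line,
    appending an EOS token so the model learns where outputs end."""
    out_path = Path(out_path)
    with out_path.open("w", encoding="utf-8") as f:
        for rec in records:
            text = rec["prompt"] + rec["completion"] + eos_token
            f.write(json.dumps({"text": text}) + "\n")
    return out_path

# Usage: two toy examples written to a temp file, then read back.
records = [
    {"prompt": "Q: 2+2?\nA: ", "completion": "4"},
    {"prompt": "Q: capital of France?\nA: ", "completion": "Paris"},
]
path = prepare_jsonl_dataset(records, Path(tempfile.gettempdir()) / "train.jsonl")
lines = path.read_text(encoding="utf-8").splitlines()
```

One record per line keeps the file streamable for large corpora, and the explicit EOS marker matters for causal-LM fine-tuning, where examples are concatenated or packed into fixed-length sequences.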
GoLLIE
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
realtime-bakllava - llama.cpp with the BakLLaVA model, describing what it sees
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
AtomGPT - A Chinese-English pretrained large model, aiming to match ChatGPT's level of capability
code-llama-for-vscode - Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot.
AnglE - Angle-optimized Text Embeddings | 🔥 SOTA on STS and MTEB Leaderboard
LLMCompiler - [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling
replicate-llama2-sms-chatbot
SqueezeLLM - [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
go-llama2 - Llama 2 inference in one file of pure Go
api-for-open-llm - OpenAI-style API for open large language models: use LLMs just like ChatGPT! Supports LLaMA, LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, Xverse, SqlCoder, CodeLLaMA, ChatGLM, ChatGLM2, ChatGLM3, etc. A unified backend API for open-source large models.