gpt-j-fine-tuning-example vs FastLoRAChat

| | gpt-j-fine-tuning-example | FastLoRAChat |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 63 | 119 |
| Growth | - | - |
| Activity | 10.0 | 7.2 |
| Last commit | over 1 year ago | about 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | - | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts with mentions or reviews of gpt-j-fine-tuning-example:
- getting Mycroft AI to work with...AI?
  "You can also fine-tune these: https://www.forefront.ai/blog-posts/how-to-fine-tune-gpt-j https://github.com/gustavecortal/gpt-j-fine-tuning-example"
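For context on what "fine-tuning GPT-J" involves, here is a minimal causal-LM fine-tuning sketch using Hugging Face Transformers. It is not the linked repository's exact recipe: the dataset (wikitext) and all hyperparameters are illustrative placeholders, and a 6B-parameter model in fp16 generally needs far more GPU memory than shown here unless combined with tricks like 8-bit weights or gradient checkpointing.

```python
# Minimal sketch of fine-tuning GPT-J as a causal language model.
# NOT the linked repo's recipe; dataset and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "EleutherAI/gpt-j-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J ships without a pad token

model = AutoModelForCausalLM.from_pretrained(model_name)

# Any plain-text dataset works; wikitext is just a stand-in here.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gptj-finetuned",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,  # simulate a larger batch on small GPUs
        num_train_epochs=1,
        fp16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```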
Posts with mentions or reviews of FastLoRAChat:
- [P] FastLoRAChat: Instruct-tune LLaMA on consumer hardware with ShareGPT data
- Announcing FastLoRAChat: training a ChatGPT-style model without an A100
- FastLoRAChat – a LoRA-finetuned LLM with ChatGPT-like capability
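As a rough illustration of the LoRA technique the project's name refers to, here is a minimal sketch using the PEFT library to attach low-rank adapters to a LLaMA-family model. The checkpoint id, target modules, and hyperparameters below are assumptions for illustration, not FastLoRAChat's actual settings.

```python
# Minimal sketch: attaching LoRA adapters with the PEFT library.
# Checkpoint, target modules, and hyperparameters are illustrative assumptions.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "huggyllama/llama-7b"  # assumption: any LLaMA-family checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of base weights
```

Only the small adapter matrices receive gradients; the frozen base weights stay untouched, which is what makes instruct-tuning on consumer hardware (i.e. "without an A100") feasible.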
What are some alternatives?
text-generation-webui-colab - A colab gradio web UI for running Large Language Models
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
comfyui-colab - Colab templates for ComfyUI, including new nodes
lora-instruct - Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA
Dreambooth - Fine-tuning of diffusion models
hyde - HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels
machine-learning-experiments - 🤖 Interactive Machine Learning experiments: 🏋️models training + 🎨models demo
ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
BLOOM-fine-tuning - Finetune BLOOM
llama2-haystack - Using Llama2 with Haystack, the NLP/LLM framework.
whisper-youtube - 🔉 YouTube video transcription with OpenAI's Whisper
alpaca-lora - Instruct-tune LLaMA on consumer hardware