LLaMA-Adapter
Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters [Moved to: https://github.com/OpenGVLab/LLaMA-Adapter] (by ZrrSkywalker)
LocalAI
:robot: The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more models architectures. It allows to generate Text, Audio, Video, Images. Also with voice cloning capabilities. (by mudler)
| | LLaMA-Adapter | LocalAI |
|---|---|---|
| Mentions | 16 | 83 |
| Stars | 4,021 | 20,346 |
| Growth | - | 10.5% |
| Activity | 9.4 | 9.9 |
| Last commit | 11 months ago | about 21 hours ago |
| Language | Python | C++ |
| License | GNU General Public License v3.0 only | MIT License |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
LLaMA-Adapter
Posts with mentions or reviews of LLaMA-Adapter.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2023-06-09.
- Are you selfhosting a ChatGPT alternative?
- Best general purpose model for commercial license?
Either LLaMA with Alpaca LoRA 65B, or the LLaMA-Adapter-V2-65B chat demo. I haven't seen any tests of the 65B LLaMA-Adapter-V2, but they claim it's as good as ChatGPT when compared using GPT-4.
- LLaMA-Adapter V2: fine-tuned LLaMA 65B for visual instruction, and LLaMA Chat65B trained with ShareGPT data for chatting. The Chat65B model has been released.
Chat65B: https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/llama_adapter_v2_chat65b
- LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
How to efficiently transform large language models (LLMs) into instruction followers has recently become a popular research direction, while training LLMs for multi-modal reasoning remains less explored. Although the recent LLaMA-Adapter demonstrates the potential to handle visual inputs with LLMs, it still cannot generalize well to open-ended visual instructions and lags behind GPT-4. In this paper, we present LLaMA-Adapter V2, a parameter-efficient visual instruction model. Specifically, we first augment LLaMA-Adapter by unlocking more learnable parameters (e.g., norm, bias and scale), which distribute the instruction-following ability across the entire LLaMA model beyond the adapters. Secondly, we propose an early-fusion strategy that feeds visual tokens only into the early LLM layers, contributing to better incorporation of visual knowledge. Thirdly, a joint training paradigm over image-text pairs and instruction-following data is introduced by optimizing disjoint groups of learnable parameters. This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following, and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset. During inference, we incorporate additional expert models (e.g., captioning/OCR systems) into LLaMA-Adapter to further enhance its image understanding capability without incurring training costs. Compared to the original LLaMA-Adapter, our LLaMA-Adapter V2 can perform open-ended multi-modal instructions by introducing merely 14M parameters over LLaMA. The newly designed framework also exhibits stronger language-only instruction-following capabilities and even excels in chat interactions. Our code and models are available at https://github.com/ZrrSkywalker/LLaMA-Adapter.
- Surpasses ChatGPT on Some Tasks
- [News] This language model surpasses ChatGPT on some prompts
- Meet LLaMA-Adapter: A Lightweight Adaption Method For Fine-Tuning Instruction-Following LLaMA Models Using 52K Data Provided By Stanford Alpaca
Quick Read: https://www.marktechpost.com/2023/03/31/meet-llama-adapter-a-lightweight-adaption-method-for-fine-tuning-instruction-following-llama-models-using-52k-data-provided-by-stanford-alpaca/
Paper: https://arxiv.org/pdf/2303.16199.pdf
GitHub: https://github.com/ZrrSkywalker/LLaMA-Adapter
- LLaMA-Adapter: Efficient Fine-Tuning of LLaMA
- [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
Found relevant code at https://github.com/ZrrSkywalker/LLaMA-Adapter
- You can now fine-tune LLaMA to follow instructions within ONE hour
LocalAI
Posts with mentions or reviews of LocalAI.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-01-19.
- LocalAI: Self-hosted OpenAI alternative reaches 2.14.0
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
- What do you use to run your models?
If you're running this as a server, I would recommend LocalAI: https://github.com/mudler/LocalAI
- OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
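Because LocalAI mirrors OpenAI's REST shape, swapping providers is mostly a matter of changing the base URL. A minimal stdlib-only sketch of building such a request is below; the port (8080, LocalAI's default) and the model name are assumptions, so substitute whatever model you have loaded:

```python
import json
from urllib import request

# Assumption: LocalAI is listening on its default port, 8080.
BASE_URL = "http://localhost:8080"

def chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> request.Request:
    """Build an OpenAI-style chat-completion request aimed at LocalAI."""
    body = json.dumps({
        "model": model,  # hypothetical model name; use one you've loaded
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("Hello!")
# Sending it requires a running LocalAI server:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same URL swap works with the official OpenAI client libraries by pointing their base URL at the LocalAI server instead of api.openai.com.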
- "ChatGPT românesc" (Romanian ChatGPT)
For inspiration: LocalAI, a replacement for OpenAI. It's already hot on GitHub.
- Local LLMs to run on old iMac / Hardware
Your hardware should be fine for inference, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlast acceleration for your CPU. If you're running other things, you can limit the inference process to 2 or 3 threads. That should get it working; I've been able to run inference on even 13B models on cheap Rockchip SoCs. Your CPU should be fine, even if it's a little outdated.
LocalAI: https://github.com/mudler/LocalAI
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF
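The thread-limiting advice above can be sketched as a container invocation. This is an assumption-laden fragment, not LocalAI's documented canonical command: the image tag, the `THREADS` environment variable, and the `/models` mount path should all be checked against the current LocalAI README before use.

```shell
# Run LocalAI on CPU only, capping inference at 2 threads so other
# workloads stay responsive. Image tag, THREADS variable, and mount
# path are assumptions -- verify against LocalAI's own docs.
docker run -p 8080:8080 \
  -e THREADS=2 \
  -v "$PWD/models:/models" \
  localai/localai:latest-cpu
```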