LLaMA-Adapter Alternatives
Similar projects and alternatives to LLaMA-Adapter
-
text-generation-webui
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
-
guidance
Discontinued: A guidance language for controlling large language models. [Moved to: https://github.com/guidance-ai/guidance] (by microsoft)
-
LocalAI
:robot: The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware with no GPU required. Runs gguf, transformers, diffusers, and many other model architectures. It can generate text, audio, video, and images, and also supports voice cloning.
-
open_llama
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
-
serge
A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API.
-
text-generation-webui-docker
Docker variants of oobabooga's text-generation-webui, including pre-built images.
-
fuseai
A self-hosted, open-source web app for interacting with OpenAI APIs. Currently supports ChatGPT, with DALL·E and Whisper support coming.
-
chatgpt-telegram-bot
🤖 A Telegram bot that integrates with OpenAI's official ChatGPT APIs to provide answers, written in Python (by n3d1117)
LLaMA-Adapter reviews and mentions
- Are you self-hosting a ChatGPT alternative?
-
Best general purpose model for commercial license?
Either LLaMA with Alpaca LoRA 65B, or the LLaMA-Adapter-V2-65B chat demo. I haven't seen any tests of the 65B LLaMA-Adapter-V2, but the authors claim it is as good as ChatGPT when compared using GPT-4 as a judge.
-
LLaMA-Adapter V2: a fine-tuned LLaMA 65B for visual instruction following, plus LLaMA Chat65B, trained on ShareGPT data for chatting. The Chat65B model has been released.
Chat65B: https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/llama_adapter_v2_chat65b
-
LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
How to efficiently transform large language models (LLMs) into instruction followers has recently become a popular research direction, while training LLMs for multi-modal reasoning remains less explored. Although the recent LLaMA-Adapter demonstrates the potential to handle visual inputs with LLMs, it still cannot generalize well to open-ended visual instructions and lags behind GPT-4. In this paper, we present LLaMA-Adapter V2, a parameter-efficient visual instruction model. Specifically, we first augment LLaMA-Adapter by unlocking more learnable parameters (e.g., norm, bias, and scale), which distributes the instruction-following ability across the entire LLaMA model beyond the adapters. Secondly, we propose an early fusion strategy that feeds visual tokens only into the early LLM layers, contributing to better visual knowledge incorporation. Thirdly, a joint training paradigm over image-text pairs and instruction-following data is introduced by optimizing disjoint groups of learnable parameters. This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following, and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset. During inference, we incorporate additional expert models (e.g., captioning/OCR systems) into LLaMA-Adapter to further enhance its image understanding capability without incurring training costs. Compared to the original LLaMA-Adapter, our LLaMA-Adapter V2 can perform open-ended multi-modal instructions by introducing merely 14M parameters over LLaMA. The newly designed framework also exhibits stronger language-only instruction-following capabilities and even excels in chat interactions. Our code and models are available at https://github.com/ZrrSkywalker/LLaMA-Adapter.
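The "unlocking" idea in the abstract (training only norm, bias, and scale parameters plus the adapters, while freezing everything else) can be sketched as a name-based filter over a model's parameters. This is a minimal illustration, not the repository's actual code; the parameter names and keywords below are hypothetical.

```python
# Hypothetical sketch of LLaMA-Adapter V2's parameter unlocking: besides the
# adapter prompts, only norm, bias, and scale parameters are made trainable,
# so the fine-tuned fraction of the model stays tiny. Names are illustrative.
UNLOCKED_KEYWORDS = ("norm", "bias", "scale", "adapter")

def select_trainable(param_names):
    """Return the subset of parameter names that would be unfrozen."""
    return [n for n in param_names
            if any(k in n for k in UNLOCKED_KEYWORDS)]

params = [
    "layers.0.attention.wq.weight",    # frozen: main projection weights
    "layers.0.attention.wq.bias",      # unlocked: bias term
    "layers.0.attention_norm.weight",  # unlocked: normalization
    "layers.0.adapter_prompt",         # unlocked: adapter parameters
    "tok_embeddings.weight",           # frozen: embeddings
]

print(select_trainable(params))
```

In a real training loop the same filter would decide which parameters get `requires_grad=True`; the point is that instruction-following capacity is distributed across the whole network through these small, cheap-to-train parameter groups.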
- Surpasses ChatGPT on Some Tasks
- [News] This language model surpasses ChatGPT on some prompts
-
Meet LLaMA-Adapter: A Lightweight Adaption Method For Fine-Tuning Instruction-Following LLaMA Models Using 52K Data Provided By Stanford Alpaca
Quick Read: https://www.marktechpost.com/2023/03/31/meet-llama-adapter-a-lightweight-adaption-method-for-fine-tuning-instruction-following-llama-models-using-52k-data-provided-by-stanford-alpaca/ Paper: https://arxiv.org/pdf/2303.16199.pdf Github: https://github.com/ZrrSkywalker/LLaMA-Adapter
- LLaMA-Adapter: Efficient Fine-Tuning of LLaMA
-
[R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
Found relevant code at https://github.com/ZrrSkywalker/LLaMA-Adapter, along with other code implementations.
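The "zero-init attention" named in the paper's title can be sketched as a learnable gate on the adapter's attention contribution: the gate starts at zero, so training begins exactly at the frozen pretrained model's behavior and the adapter's influence grows gradually. This is a minimal standalone illustration under that assumption, not the repository's implementation.

```python
import math

def gated_attention(base_attn, adapter_attn, gate):
    """Combine the frozen model's attention output with the adapter's
    contribution through a learnable scalar gate. With the gate initialized
    to 0, tanh(0) == 0, so the adapter initially contributes nothing."""
    g = math.tanh(gate)
    return [b + g * a for b, a in zip(base_attn, adapter_attn)]

base = [0.2, 0.5, 0.3]
adapter = [0.9, -0.4, 0.1]

# At initialization (gate = 0) the output equals the base attention,
# preserving the pretrained model's behavior at the start of fine-tuning.
assert gated_attention(base, adapter, 0.0) == base
```

As the gate is trained away from zero, the adapter's instruction-specific signal is blended in, which is what lets fine-tuning be both fast and stable.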
- You can now fine-tune LLaMA to follow instructions within ONE hour
-
Stats
ZrrSkywalker/LLaMA-Adapter is an open source project licensed under the GNU General Public License v3.0 only, which is an OSI-approved license.
The primary programming language of LLaMA-Adapter is Python.