bench-warmers
LLaMA-Adapter
| | bench-warmers | LLaMA-Adapter |
|---|---|---|
| Mentions | 6 | 16 |
| Stars | 54 | 4,021 |
| Growth | - | - |
| Activity | 9.7 | 9.4 |
| Latest Commit | 16 days ago | 11 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 only |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bench-warmers
-
What to do next?
I have more ideas than I know what to do with; help yourself: https://github.com/dmarx/bench-warmers
-
Any ideas for NLP end-to-end projects or blogs for a beginner with a linguistics background to boost their CV?
You're welcome to help yourself to my ideas (no guarantees that they're any good or even comprehensible; I do a lot of my brainstorming while high). Here's my brainstorming space; scroll down for a categorized ToC: https://github.com/dmarx/bench-warmers
-
[R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
I've decided to just lean into it and am literally just giving my ideas away. https://github.com/dmarx/bench-warmers
-
Using Github to write my notes has helped me retain knowledge immensely.
It might sound like a lot, but it's actually really lightweight and easy to use. Check it out: https://github.com/dmarx/bench-warmers
-
We are the developers behind pandas, currently preparing for the 2.0 release :) AMA
You've sort of become victims of your own success: as another pandas dev mentioned, you want to preserve backwards compatibility, and this significantly complicates any restructuring. I'm sympathetic and not sure what the best solution would be. I had an idea last night, but I'm not sure I like that approach either.
-
Need help on finding an area where machine learning is applicable on day-to-day life but not implemented already
To be clear, I'm talking about e.g. the vision impaired, hearing impaired, etc. Here's an example of a project idea in this space (possibly a bit more ambitious than what you're looking for, but if you think you could tackle it, I encourage you to take a stab at it): https://github.com/dmarx/bench-warmers/blob/main/automated-video-description.md
LLaMA-Adapter
- Are you self-hosting a ChatGPT alternative?
-
Best general purpose model for commercial license?
Either LLaMA with Alpaca LoRA 65B, or the LLaMA-Adapter-V2-65B chat demo. I haven't seen any tests of the 65B LLaMA-Adapter-V2, but they claim it's as good as ChatGPT when compared using GPT-4 as the judge.
-
LLaMA-Adapter V2: a fine-tuned LLaMA 65B for visual instruction, and LLaMA Chat65B, trained on ShareGPT data for chatting. The Chat65B model has been released.
Chat65B: https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/llama_adapter_v2_chat65b
-
LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
How to efficiently transform large language models (LLMs) into instruction followers is recently a popular research direction, while training LLM for multi-modal reasoning remains less explored. Although the recent LLaMA-Adapter demonstrates the potential to handle visual inputs with LLMs, it still cannot generalize well to open-ended visual instructions and lags behind GPT-4. In this paper, we present LLaMA-Adapter V2, a parameter-efficient visual instruction model. Specifically, we first augment LLaMA-Adapter by unlocking more learnable parameters (e.g., norm, bias and scale), which distribute the instruction-following ability across the entire LLaMA model besides adapters. Secondly, we propose an early fusion strategy to feed visual tokens only into the early LLM layers, contributing to better visual knowledge incorporation. Thirdly, a joint training paradigm of image-text pairs and instruction-following data is introduced by optimizing disjoint groups of learnable parameters. This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset. During inference, we incorporate additional expert models (e.g. captioning/OCR systems) into LLaMA-Adapter to further enhance its image understanding capability without incurring training costs. Compared to the original LLaMA-Adapter, our LLaMA-Adapter V2 can perform open-ended multi-modal instructions by merely introducing 14M parameters over LLaMA. The newly designed framework also exhibits stronger language-only instruction-following capabilities and even excels in chat interactions. Our code and models are available at https://github.com/ZrrSkywalker/LLaMA-Adapter.
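The "unlocking more learnable parameters (e.g., norm, bias and scale)" step from the abstract amounts to a parameter-selection rule: freeze the entire pretrained model, then re-enable training only for small, widely distributed parameter groups plus the adapter weights. A minimal framework-agnostic sketch of that rule (the parameter names and patterns here are illustrative, not the repo's actual names):

```python
def select_trainable(param_names, unlock_patterns=("norm", "bias", "scale", "adapter")):
    """Return the subset of parameter names left trainable.

    Mirrors the LLaMA-Adapter V2 idea: the large weight matrices stay
    frozen, while norm/bias/scale parameters and adapter weights are
    unlocked so instruction-following ability spreads across all layers.
    """
    return [n for n in param_names if any(p in n for p in unlock_patterns)]

# Toy parameter list for a two-layer transformer (illustrative names only).
params = [
    "layers.0.attention.wq.weight",
    "layers.0.attention.wq.bias",
    "layers.0.attention_norm.weight",
    "layers.0.adapter.gate",
    "layers.1.feed_forward.w1.weight",
    "layers.1.ffn_norm.weight",
]

trainable = select_trainable(params)
# Only the bias, norm, and adapter parameters remain trainable; the big
# projection matrices are untouched, which is why the added budget is
# only ~14M parameters on top of LLaMA.
```

In a real training loop, the same rule would set `requires_grad` per parameter rather than filter a name list, but the selection logic is the same.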
- Surpasses ChatGPT on Some Tasks
- [News] This language model surpasses ChatGPT on some prompts
-
Meet LLaMA-Adapter: A Lightweight Adaption Method For Fine-Tuning Instruction-Following LLaMA Models Using 52K Data Provided By Stanford Alpaca
Quick Read: https://www.marktechpost.com/2023/03/31/meet-llama-adapter-a-lightweight-adaption-method-for-fine-tuning-instruction-following-llama-models-using-52k-data-provided-by-stanford-alpaca/
Paper: https://arxiv.org/pdf/2303.16199.pdf
GitHub: https://github.com/ZrrSkywalker/LLaMA-Adapter
- LLaMA-Adapter: Efficient Fine-Tuning of LLaMA
-
[R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
Found relevant code at https://github.com/ZrrSkywalker/LLaMA-Adapter
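The "zero-init attention" in the paper's title refers to gating the adapter's contribution with a learned factor initialized to zero, so training starts from the unmodified pretrained model and the adapter's influence grows gradually. A heavily simplified sketch of the gating idea (the real implementation gates inside the attention scores of the adapter's prompt tokens; here it is reduced to a residual mix for clarity):

```python
import math

def gated_output(base_out, adapter_out, gate):
    """Mix the frozen model's output with the adapter's output.

    With gate == 0.0 (its initial value), tanh(gate) == 0, so the result
    equals the base model's output exactly: an untrained adapter cannot
    disturb the pretrained behaviour at the start of fine-tuning.
    """
    g = math.tanh(gate)  # bounds the mixing factor to (-1, 1)
    return [b + g * a for b, a in zip(base_out, adapter_out)]

base = [0.5, -1.2, 3.0]
adapter = [10.0, 10.0, 10.0]  # stand-in for untrained adapter noise

# At initialization the gate is zero and the output is untouched.
assert gated_output(base, adapter, gate=0.0) == base
```

As the gate is trained away from zero, the adapter's signal is blended in, which is what lets fine-tuning stay stable while adding so few parameters.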
- You can now fine-tune LLaMA to follow instructions within ONE hour
What are some alternatives?
khoj - Your AI second brain. A copilot to get answers to your questions, whether from your own notes or from the internet. Use powerful online (e.g. GPT-4) or private, local (e.g. Mistral) LLMs. Self-host locally or use our web app. Access from Obsidian, Emacs, the desktop app, the web, or WhatsApp.
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
notes
gpt4all - gpt4all: run open-source LLMs anywhere
LLaMA-Adapter - [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
chatgpt-telegram-bot - 🤖 A Telegram bot that integrates with OpenAI's official ChatGPT APIs to provide answers, written in Python
python-bigquery-pandas - Google BigQuery connector for pandas
text-generation-webui-docker - Docker variants of oobabooga's text-generation-webui, including pre-built images.
pandas-stubs - Public type stubs for pandas
open_llama - OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
obsidian-omnisearch - A search engine that "just works" for Obsidian. Supports OCR and PDF indexing.
scikit-learn - scikit-learn: machine learning in Python