chatgpt-telegram-bot vs LLaMA-Adapter

| | chatgpt-telegram-bot | LLaMA-Adapter |
|---|---|---|
| Mentions | 3 | 16 |
| Stars | 2,978 | 4,021 |
| Growth | - | - |
| Activity | 7.8 | 9.4 |
| Last Commit | about 2 months ago | about 1 year ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
chatgpt-telegram-bot
- Are you self-hosting a ChatGPT alternative?
-
ChatGPT Anywhere Through SMS
Why not use a Telegram bot? There are plenty of ready-to-use ChatGPT Telegram bots, e.g. https://github.com/n3d1117/chatgpt-telegram-bot
-
Show HN: ChatGPT Inline Bot on Telegram
Lol, I literally deployed an implementation from a GitHub repo[0] for free on Fly.io just hours ago. This way I can also check the code and pay only for what I use. Seems like low-hanging fruit for people who aren't that deep into tech.
[0]: https://github.com/n3d1117/chatgpt-telegram-bot
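The pattern these comments describe is small enough to sketch. Below is a minimal illustration of a Telegram bot that relays each message to the OpenAI chat API, not n3d1117's actual implementation. It assumes the python-telegram-bot (v20+) and openai (v1+) packages; the environment variable names are placeholders.

```python
import os

from openai import OpenAI
from telegram import Update
from telegram.ext import ApplicationBuilder, ContextTypes, MessageHandler, filters

client = OpenAI()  # reads OPENAI_API_KEY from the environment


async def relay(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Forward the user's message to the chat completion API and reply
    # with the model's answer. (The call is synchronous; fine for a sketch.)
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": update.message.text}],
    )
    await update.message.reply_text(completion.choices[0].message.content)


app = ApplicationBuilder().token(os.environ["TELEGRAM_BOT_TOKEN"]).build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, relay))
app.run_polling()
```

Real bots such as the one linked above add per-user conversation history, rate limiting, and streaming responses on top of this skeleton.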
LLaMA-Adapter
- Are you self-hosting a ChatGPT alternative?
-
Best general purpose model for commercial license?
Either LLaMA with Alpaca LoRA 65B, or the LLaMA-Adapter-V2-65B chat demo. I haven't seen any tests of the 65B LLaMA-Adapter-V2, but they claim it's as good as ChatGPT when evaluated by GPT-4.
-
LLaMA-Adapter V2: fine-tuned LLaMA 65B for visual instruction following, and LLaMA Chat65B trained on ShareGPT data for chatting. The Chat65B model has been released.
Chat65B: https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/llama_adapter_v2_chat65b
-
LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
How to efficiently transform large language models (LLMs) into instruction followers has recently become a popular research direction, while training LLMs for multi-modal reasoning remains less explored. Although the recent LLaMA-Adapter demonstrates the potential to handle visual inputs with LLMs, it still cannot generalize well to open-ended visual instructions and lags behind GPT-4. In this paper, we present LLaMA-Adapter V2, a parameter-efficient visual instruction model. Specifically, we first augment LLaMA-Adapter by unlocking more learnable parameters (e.g., norm, bias and scale), which distributes the instruction-following ability across the entire LLaMA model in addition to the adapters. Secondly, we propose an early fusion strategy that feeds visual tokens only into the early LLM layers, contributing to better visual knowledge incorporation. Thirdly, a joint training paradigm of image-text pairs and instruction-following data is introduced by optimizing disjoint groups of learnable parameters. This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset. During inference, we incorporate additional expert models (e.g., captioning/OCR systems) into LLaMA-Adapter to further enhance its image understanding capability without incurring training costs. Compared to the original LLaMA-Adapter, our LLaMA-Adapter V2 can perform open-ended multi-modal instructions by merely introducing 14M parameters over LLaMA. The newly designed framework also exhibits stronger language-only instruction-following capabilities and even excels in chat interactions. Our code and models are available at https://github.com/ZrrSkywalker/LLaMA-Adapter.
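The "unlocking more learnable parameters" step from the abstract is easy to picture in code. Here is a minimal PyTorch sketch of the idea; the substring matching on parameter names is an assumption, since actual names vary across LLaMA implementations.

```python
import torch.nn as nn


def unlock_bias_norm_scale(model: nn.Module) -> int:
    """Freeze the whole model, then re-enable only the small parameter
    groups the paper describes unlocking (norms, biases, scales).
    Name-based matching is illustrative, not the official criterion."""
    for param in model.parameters():
        param.requires_grad = False
    trainable = 0
    for name, param in model.named_parameters():
        if any(key in name for key in ("norm", "bias", "scale")):
            param.requires_grad = True
            trainable += param.numel()
    return trainable  # number of trainable parameters after unlocking
```

Because these groups are tiny relative to the full model, this is consistent with the abstract's claim of adding only ~14M parameters on top of LLaMA.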
- Surpasses ChatGPT on Some Tasks
- [News] This language model surpasses ChatGPT on some prompts
-
Meet LLaMA-Adapter: A Lightweight Adaption Method For Fine-Tuning Instruction-Following LLaMA Models Using 52K Data Provided By Stanford Alpaca
Quick Read: https://www.marktechpost.com/2023/03/31/meet-llama-adapter-a-lightweight-adaption-method-for-fine-tuning-instruction-following-llama-models-using-52k-data-provided-by-stanford-alpaca/
Paper: https://arxiv.org/pdf/2303.16199.pdf
GitHub: https://github.com/ZrrSkywalker/LLaMA-Adapter
- LLaMA-Adapter: Efficient Fine-Tuning of LLaMA
-
[R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
Found relevant code at https://github.com/ZrrSkywalker/LLaMA-Adapter
- You can now fine-tune LLaMA to follow instructions within ONE hour
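The zero-init attention named in the mention above is the mechanism behind that one-hour, 1.2M-parameter fine-tuning claim. The following is an illustrative simplification, not the official implementation: a learnable adaptation prompt is attended to alongside the real tokens, and its contribution is scaled by a gate initialized to zero, so training starts from the unchanged frozen model. Dimensions and initialization scales are assumptions.

```python
import math

import torch
import torch.nn as nn


class ZeroInitPromptAttention(nn.Module):
    """Sketch of zero-init attention: the adaptation prompt's attention
    passes through a per-head gate that starts at zero, so the layer
    initially reproduces the frozen pretrained attention exactly."""

    def __init__(self, dim: int, n_heads: int, prompt_len: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        # In LLaMA-Adapter these projections belong to the frozen model.
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, dim, bias=False)
        self.wv = nn.Linear(dim, dim, bias=False)
        self.wo = nn.Linear(dim, dim, bias=False)
        # Learnable adaptation prompt (init scale is an assumption).
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Zero-initialized gate, one scalar per head.
        self.gate = nn.Parameter(torch.zeros(n_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape

        def split(t: torch.Tensor) -> torch.Tensor:
            return t.view(b, -1, self.n_heads, self.head_dim).transpose(1, 2)

        q, k, v = split(self.wq(x)), split(self.wk(x)), split(self.wv(x))
        p = self.prompt.unsqueeze(0).expand(b, -1, -1)
        pk, pv = split(self.wk(p)), split(self.wv(p))

        scale = 1.0 / math.sqrt(self.head_dim)
        # Softmax over token scores and prompt scores independently;
        # only the prompt branch is multiplied by the (initially zero) gate.
        tok_attn = ((q @ k.transpose(-2, -1)) * scale).softmax(dim=-1)
        prm_attn = ((q @ pk.transpose(-2, -1)) * scale).softmax(dim=-1)
        prm_attn = prm_attn * torch.tanh(self.gate).view(1, -1, 1, 1)

        out = tok_attn @ v + prm_attn @ pv
        return self.wo(out.transpose(1, 2).reshape(b, n, d))
```

Since tanh(0) = 0, gradient updates can grow the prompt's influence smoothly from nothing, which is what makes such a small parameter budget trainable so quickly.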
What are some alternatives?
chatgpt-telegram-bot - This is a Telegram bot that uses ChatGPT to generate responses to messages.
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
LLaMA-Adapter - [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
gpt4all - GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
telemage - DALL-E Telegram Bot on Deta Space
bench-warmers - DigThatData's Public Brainstorming space
ChatGPT-RedditBot - The ChatGPT-RedditBot is a Reddit bot that uses the ChatGPT large language model to generate engaging responses to Reddit threads and submissions.
text-generation-webui-docker - Docker variants of oobabooga's text-generation-webui, including pre-built images.
telegram_wakeonlan_bot - A simple Telegram wake-on-LAN (WOL) bot to wake up computers on your network
open_llama - OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
openai-quickstart-node - Node.js example app from the OpenAI API quickstart tutorial
LocalAI - The free, open-source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: generate text, audio, video, images, voice cloning, distributed inference