| | big-AGI | S-LoRA |
|---|---|---|
| Mentions | 8 | 4 |
| Stars | 4,379 | 1,509 |
| Growth | - | 7.2% |
| Activity | 10.0 | 7.1 |
| Latest commit | 6 days ago | 4 months ago |
| Language | TypeScript | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
big-AGI
- GPT-4 Turbo with Vision is a step backwards for coding
- LM Studio – Discover, download, and run local LLMs
- ChatGPT Voice Announced (By Greg Brockman)
- Is there an easy way to use the March 14th ChatGPT-4 model?
You could run a local copy of big-agi and edit this file to use the dated model.
- Managing ChatGPT token/character limits tool
As ChatGPT counts tokens for both input and output, you'd have to be thoughtful on the input side as well, so this is where the tool would come in handy within the same chat session. I don't think it remembers separate chat sessions, so without access to history you'd need to either start a new chat session each time, or set up your own ChatGPT instance with your own OpenAI API token and be billed for usage. For my own ChatGPT instance I use the awesome app at https://github.com/enricoros/nextjs-chatgpt-app, which has a demo on its readme page too.
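The kind of input-side token budgeting described above can be sketched as a small trimming helper. This is a hypothetical illustration, not code from the linked app; a real client would count tokens with an actual tokenizer (e.g. tiktoken), while here a crude whitespace split stands in for token counting.

```python
# Hypothetical sketch: keep a chat history under an input token budget.
# estimate_tokens is a rough stand-in for a real tokenizer such as tiktoken.

def estimate_tokens(text: str) -> int:
    # Crude approximation: one "token" per whitespace-separated word.
    return max(1, len(text.split()))

def trim_history(messages, max_input_tokens):
    """Retain the system prompt plus the most recent messages whose
    combined estimated token count fits within max_input_tokens."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    budget = max_input_tokens - sum(estimate_tokens(m["content"]) for m in system)
    kept = []
    for msg in reversed(rest):  # walk from newest to oldest
        cost = estimate_tokens(msg["content"])
        if cost > budget:
            break  # older messages are dropped once the budget is spent
        kept.append(msg)
        budget -= cost
    return system + list(reversed(kept))
```

Dropping the oldest messages first mirrors what chat UIs typically do when a conversation outgrows the context window.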
- Make your own ChatGPT UI
Nice. Maybe you could make the initializing prompts in https://github.com/enricoros/nextjs-chatgpt-app/blob/main/pa... transparent to the user, and even changeable by them.
S-LoRA
- Representation Engineering: Mistral-7B on Acid
You can also batch requests using different LoRAs. See "S-LoRA: Serving Thousands of Concurrent LoRA Adapters". https://arxiv.org/abs/2311.03285
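The core idea behind batching requests that use different LoRA adapters can be shown in a toy NumPy sketch. Everything below (names, shapes, the `adapters` dict) is illustrative, not S-LoRA's actual API: the base weight is shared across the batch in one GEMM, and each request's low-rank update is applied separately, which S-LoRA fuses into custom CUDA kernels.

```python
# Toy illustration of heterogeneous-LoRA batching (illustrative only,
# not S-LoRA's API): y_i = x_i @ W + (x_i @ A_i) @ B_i per request.
import numpy as np

d_in, d_out, rank = 8, 8, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(d_in, d_out))  # shared base weight, never duplicated

# Two different adapters, e.g. one per tenant or task.
adapters = {
    "a": (rng.normal(size=(d_in, rank)), rng.normal(size=(rank, d_out))),
    "b": (rng.normal(size=(d_in, rank)), rng.normal(size=(rank, d_out))),
}

def batched_lora_forward(xs, adapter_ids):
    xs = np.asarray(xs)          # (batch, d_in)
    out = xs @ W                 # one shared GEMM for the whole batch
    # Per-request low-rank updates; a plain loop here for clarity.
    for i, aid in enumerate(adapter_ids):
        A, B = adapters[aid]
        out[i] += (xs[i] @ A) @ B
    return out
```

Because the update stays factored as `(x @ A) @ B`, no merged `W + A @ B` matrix is ever materialized per adapter, which is what lets a server keep thousands of adapters resident at once.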
- S-LoRA: Serving Concurrent LoRA Adapters
- LM Studio – Discover, download, and run local LLMs
Depending on what you mean by "production" you'll probably want to look at "real" serving implementations like HF TGI, vLLM, lmdeploy, Triton Inference Server (tensorrt-llm), etc. There are also more bespoke implementations for things like serving large numbers of LoRA adapters[0].
These are heavily optimized for more efficient memory usage, performance, and responsiveness when serving large numbers of concurrent requests/users in addition to things like model versioning/hot load/reload/etc, Prometheus metrics, things like that.
One major difference is that at this level many of the more aggressive memory-optimization techniques, and support for CPU, aren't even considered. Generally speaking you get GPTQ and possibly AWQ quantization, plus their optimizations, and CUDA only. Their target users and use cases often run on A100/H100 hardware and are just trying to need fewer of those GPUs. Support for lower-VRAM cards, older CUDA compute architectures, etc. comes secondary to that (for the most part).
[0] - https://github.com/S-LoRA/S-LoRA
- GitHub - S-LoRA/S-LoRA: S-LoRA: Serving Thousands of Concurrent LoRA Adapters
What are some alternatives?
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
Lobe Chat - LobeChat is an open-source, extensible (Function Calling), high-performance chatbot framework. It supports one-click free deployment of your private ChatGPT/LLM web application.
hoof - "Just hoof it!" - A Spotlight-like interface to Ollama
chatgpt-demo - Minimal web UI for ChatGPT.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
fill - Generative fill in 3D.
SillyTavern - LLM Frontend for Power Users.
chatbot-ui - AI chat for every model.
ollama-webui - ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI) [Moved to: https://github.com/open-webui/open-webui]
gpt-react-designer - ⚡️ Generate and preview ⚛️ React components with 🤖 ChatGPT
next-enterprise - 💼 An enterprise-grade Next.js boilerplate for high-performance, maintainable apps. Packed with features like Tailwind CSS, TypeScript, ESLint, Prettier, testing tools, and more to accelerate your development.