alpaca.cpp
text-generation-webui
| | alpaca.cpp | text-generation-webui |
|---|---|---|
| Mentions | 94 | 3 |
| Stars | 9,878 | 5 |
| Latest version | - | - |
| Activity | 9.4 | 9.0 |
| Last commit | about 1 year ago | about 1 year ago |
| Language | C | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
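The stated example (an activity of 9.0 puts a project among the top 10% of tracked projects) suggests a simple mapping from score to percentile bracket. A toy sketch, assuming the scale is linear, which the text only confirms for the 9.0 case:

```python
def activity_to_top_percent(activity: float) -> float:
    """Map an activity score (0-10) to a 'top X%' bracket.

    Assumption: the scale is linear; the page only states that 9.0
    corresponds to the top 10% of tracked projects.
    """
    return 100.0 - activity * 10.0

print(activity_to_top_percent(9.0))  # 10.0 -> top 10%, matching the stated example
print(activity_to_top_percent(9.4))  # alpaca.cpp's score, under the same assumption
```

Under this (assumed) linear reading, alpaca.cpp's 9.4 would place it in roughly the top 6%.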
alpaca.cpp
-
LLaMA Now Goes Faster on CPUs
Where's the 30B-in-6GB claim? Searching your GH link for "GB" finds [0], which is neither by jart nor by ggerganov but by another user, who is promptly told to look at [1], where Justine denies that claim.
[0] https://github.com/antimatter15/alpaca.cpp/issues/182
-
Is there potential to short NVDA?
You can just download the language model, dude!!! Not everyone needs to make their own, and the open-source models literally get better every day.
- [Oobabooga] Alpaca.cpp is extremely simple to work with.
-
Hollywood’s Screenwriters Are Right to Fear AI
Alpaca
-
Square Enix’s AI Tech Demo Is a Staggering Failure
Square could also have trained their NLP on a more specific data source, very similar to Alpaca. Alpaca was trained on interactions drawn from a larger dataset, so while it isn't as smart, it's still able to understand instructions and act upon them.
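The instruction-following behavior described above comes from Alpaca's fine-tuning on a fixed instruction prompt template. A minimal sketch of assembling that prompt, following Stanford Alpaca's published format (the helper function name is an illustration, not part of any library):

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble a prompt in the Stanford Alpaca instruction format."""
    if input_text:
        # Variant used when the task comes with additional context.
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Variant used when the instruction stands alone.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(build_alpaca_prompt("Summarize the plot of Hamlet."))
```

The model then generates text after the `### Response:` marker, which is why such a small model can still "understand instructions and act upon them."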
- [Singularity] I am Alpaca 13B - ask me anything
-
Alpaca Vs. Final Jeopardy
The model I found was in 8 parts. The alpaca.cpp chat client (chat.cpp) needs to be modified to run the 8-part model, as documented here: https://github.com/antimatter15/alpaca.cpp/issues/149
-
LocalAI: OpenAI compatible API to run LLM models locally on consumer grade hardware!
Try the instructions in this GitHub repo: https://github.com/antimatter15/alpaca.cpp. It's not the best one, but I was able to run this model on my Linux machine with 16 GB of memory; I think it's a good starting point.
-
What educational materials do you think would be most useful during/after collapse?
Doesn't run offline. If you're running something without a beefy-ish GPU, there's https://github.com/antimatter15/alpaca.cpp .
-
ChatGPT Reignited My Passion For Coding
Yes, at the moment I'm toying with Alpaca 7B/13B in a local install.
text-generation-webui
-
[Nvidia] Guide: Getting llama-7b 4bit running in simple(ish?) steps!
You will need the latest git version, not the v0.1 release (https://github.com/TheTerrasque/text-generation-webui -> Code -> Download ZIP). That holds the (first) official LoRA support code from the webui project, but I haven't tested it much.
-
Gibberish with LLaMa 7B 4bit
git clone https://github.com/TheTerrasque/text-generation-webui.git
-
How to install LLaMA: 8-bit and 4-bit
I've put together a more automated setup, maybe you'll have more luck with that: https://github.com/TheTerrasque/text-generation-webui
What are some alternatives?
gpt4all - gpt4all: run open-source LLMs anywhere
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
llama.cpp - LLM inference in C/C++
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
coral-pi-rest-server - Perform inferencing of tensorflow-lite models on an RPi with acceleration from Coral USB stick
docker - Docker - the open-source application container engine
ggml - Tensor library for machine learning
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
alpaca-lora - Instruct-tune LLaMA on consumer hardware
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
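Several of the alternatives above (bitsandbytes, GPTQ-for-LLaMa, and the 4-bit LLaMA guides) revolve around k-bit quantization. A toy sketch of the core idea — absmax 4-bit quantization of a weight tensor — which is an illustration of the concept only, not the GPTQ algorithm or the bitsandbytes API:

```python
import numpy as np

def quantize_q4(weights: np.ndarray):
    """Toy absmax 4-bit quantization: map floats onto 16 signed integer levels."""
    scale = np.abs(weights).max() / 7.0  # 4-bit signed range is roughly [-8, 7]
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_q4(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the quantized levels."""
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.5, 0.33, 0.01], dtype=np.float32)
q, s = quantize_q4(w)
w_hat = dequantize_q4(q, s)
print(np.max(np.abs(w - w_hat)))  # rounding error, bounded by scale / 2
```

Real schemes (GPTQ, GGML's q4 formats) quantize per block with per-block scales and use error-aware rounding, but the storage win is the same: 4 bits per weight instead of 16 or 32, which is what lets a 7B model fit in consumer RAM.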