llama.cpp
text-generation-webui
| | llama.cpp | text-generation-webui |
|---|---|---|
| Mentions | 744 | 875 |
| Stars | 53,471 | 34,683 |
| Growth | - | - |
| Activity | 9.9 | 9.9 |
| Latest commit | 5 days ago | 6 days ago |
| Language | C++ | Python |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama.cpp
-
"The king is dead"–Claude 3 surpasses GPT-4 on Chatbot Arena
git clone https://github.com/ggerganov/llama.cpp
-
LLMs on your local Computer (Part 1)
git clone --depth=1 https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
cmake ..
cmake --build . --config Release
cd ..  # back to the repo root so the models/ path below exists
wget -c --show-progress -O models/llama-2-13b.Q4_0.gguf "https://huggingface.co/TheBloke/Llama-2-13B-GGUF/resolve/main/llama-2-13b.Q4_0.gguf?download=true"
-
Show HN: Tech Jobs on the Command Line
I'm using https://github.com/ggerganov/llama.cpp and currently mistral 7b (on a m1 macbook pro). I'm sure with some prompt examples you can get pretty good results on a smaller model.
At the moment I don't have it open sourced due to it being part of a larger project that I'm working on that contains tailwindui licensed components.
A cool feature that I'm working on is creating a firefox plugin so you can save/index job postings from other sites and extract out meta information via an LLM. Very similar to this chrome plugin.
-
GGUF, the Long Way Around
Thank you for the reference to the CUDA file [1]. It's always nice to see how complex data structures are handled on GPUs. Does anyone have any idea what the bit patterns (starting at line 1529) are for?
[1] https://github.com/ggerganov/llama.cpp/blob/master/ggml-cuda...
-
The Era of 1-bit LLMs: ternary parameters for cost-effective computing
It does result in a significant degradation relative to an unquantized model of the same size, but even with simple llama.cpp K-quantization it's still worth it all the way down to 2-bit. The chart in this llama.cpp PR speaks for itself:
https://github.com/ggerganov/llama.cpp/pull/1684#issue-17396...
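To get a feel for why the trade-off favours quantization, some back-of-the-envelope memory arithmetic helps (this sketch is mine, not from the PR; the bits-per-weight figures for the k-quants are rough and ignore block/scale overhead):

```python
# Rough weight-memory math: params * bits-per-weight / 8 bytes.
# The bits-per-weight values for Q4_K / Q2_K below are approximations.
def weight_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for label, params, bits in [("7B fp16", 7, 16),
                            ("13B fp16", 13, 16),
                            ("13B ~Q4_K", 13, 4.5),
                            ("70B ~Q2_K", 70, 2.6)]:
    print(f"{label:>10}: ~{weight_memory_gb(params, bits):.1f} GB")
```

The upshot, which is roughly the comparison the PR chart makes, is that a much larger model quantized down to a few bits per weight (e.g. ~23 GB for a 2-bit 70B) fits in the memory budget of a far smaller unquantized model.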
-
Gemma: New Open Models
It should be possible to run it via llama.cpp[0] now.
-
Ollama is now available on Windows in preview
If you just check out https://github.com/ggerganov/llama.cpp and run make, you’ll wind up with an executable called ‘main’ that lets you run any gguf language model you choose. Then:
./main -m ./models/30B/llama-30b.Q4_K_M.gguf --prompt "say hello"
On my M2 MacBook, the first run takes a few seconds before it produces anything, but after that subsequent runs start outputting tokens immediately.
You can run LLM models right inside a short lived process.
But the majority of humans don’t want to use a single execution of a command line to access LLM completions. They want to run a program that lets them interact with an LLM. And to do that they will likely start and leave running a long-lived process with UI state - which can also serve as a host for a longer lived LLM context.
Neither use case particularly seems to need a server to function. My curiosity about why people are packaging these things up like that is completely genuine.
-
UC Berkeley: World Model on Million-Length Video and Language with RingAttention
https://github.com/ggerganov/llama.cpp/discussions/2948
You can run ollama (and a web UI) pretty trivially via docker:
docker run -d --gpus=all -v /some/dir/for/ollama/data:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:latest
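Once that container is up, anything that speaks HTTP can use it. Here is a minimal sketch against ollama's REST API; the port matches the mapping above, and the "llama2" model name is an assumption (it has to have been pulled first, e.g. with `ollama pull llama2`):

```python
import json
import urllib.request

# Ask the ollama container started above for a single, non-streamed completion.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({"model": "llama2", "prompt": "Say hello", "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```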
- FLaNK Stack Weekly 12 February 2024
-
Ask HN: Are there any reliable benchmarks for Machine Learning Model Serving?
Not exactly what you're looking for, but perhaps you'll find it useful - llama benchmarked on all M-series chips, and in the comments there are comparisons with Nvidia.
text-generation-webui
-
Ask HN: How to get started with local language models?
You can use webui https://github.com/oobabooga/text-generation-webui
Once you get a version up and running, make a copy before updating; several times updates have broken my working version and caused headaches.
a decent explanation of parameters outside of reading archive papers: https://github.com/oobabooga/text-generation-webui/wiki/03-%...
a news ai website:
-
text-generation-webui VS LibreChat - a user suggested alternative
2 projects | 29 Feb 2024
- Show HN: I made an app to use local AI as daily driver
-
Ask HN: People who switched from GPT to their own models. How was it?
The other answers are recommending paths that give you (1) less control and (2) projects with smaller ecosystems.
If you want a truly general purpose front-end for LLMs, the only good solution right now is oobabooga: https://github.com/oobabooga/text-generation-webui
All the other alternatives have only a small fraction of the features that oobabooga supports, and only support a fraction of the LLM backends, etc.
-
Nvidia's Chat with RTX is a promising AI chatbot that runs locally on your PC
> Downloading text-generation-webui takes a minute, lets you use any model and get going.
What you're missing here is that you're already deep enough into this area to know what ooogoababagababa text-generation-webui is. Let's back out to the "average Windows desktop user" level. Assuming they even know how to find it:
1) Go to https://github.com/oobabooga/text-generation-webui?tab=readm...
2) See a bunch of instructions for opening a terminal window and running random batch/PowerShell scripts. PowerShell, etc. will likely prompt you with a scary warning. Then you start wondering who ooobabagagagaba is...
3) Assuming you get this far (many users won't even get to step 1), you're greeted with a web interface[0] FILLED to the brim with technical jargon and extremely overwhelming options just to get a model loaded, which is another mind warp because you have to choose between a bunch of random models with no clear meaning and nonsensical/joke-sounding names from someone called "TheBloke". Ok...
Let's say you somehow braved this gauntlet and get this far now you get to chat with it. Ok, what about my local documents? text-generation-webui itself has nothing for that. Repeat this process over the 10 random open source projects from a bunch of names you've never heard of in an attempt to accomplish that.
This is "I saw this thing from Nvidia explode all over media, twitter, youtube, etc. I downloaded it from Nvidia, double-clicked, pointed it at a folder with documents, and it works".
That's the difference and it's very significant.
[0] - https://raw.githubusercontent.com/oobabooga/screenshots/main...
-
Meta AI releases Code Llama 70B
You can download it and run it with [this](https://github.com/oobabooga/text-generation-webui). There's an API mode that you could leverage from your VS Code extension.
-
Ollama Python and JavaScript Libraries
Same question here. Ollama is fantastic as it makes it very easy to run models locally, but if you already have a lot of code that processes OpenAI API responses (with retry, streaming, async, caching etc.), it would be nice to be able to simply switch the API client to Ollama without having to maintain a whole other branch of code that handles Ollama API responses. One way to do an easy switch is using the litellm library as a go-between, but it's not ideal (and I also recently found issues with their chat formatting for Mistral models).
For an OpenAI-compatible API my current favorite method is to spin up models using oobabooga TGW. Your OpenAI API code then works seamlessly by simply switching out the api_base to the ooba endpoint (see the sketch below). Regarding chat formatting, even ooba's Mistral formatting has issues[1], so I am doing my own in Langroid using HuggingFace tokenizer.apply_chat_template [2]
[1] https://github.com/oobabooga/text-generation-webui/issues/53...
[2] https://github.com/langroid/langroid/blob/main/langroid/lang...
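For what it's worth, a minimal sketch of that api_base switch with the current openai Python client (which calls it base_url); the port/path assume text-generation-webui's OpenAI-compatible extension and the model name is a placeholder, so adjust to whatever your server actually exposes:

```python
from openai import OpenAI

# Point the stock OpenAI client at a local OpenAI-compatible server instead of api.openai.com.
# http://127.0.0.1:5000/v1 assumes text-generation-webui's OpenAI-compatible API;
# for ollama the equivalent would typically be http://127.0.0.1:11434/v1.
client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="local-model",  # placeholder; the server answers with whatever model it has loaded
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)
```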
Related question - I assume ollama auto detects and applies the right chat formatting template for a model?
-
Ask HN: Is it feasible to train my own LLM?
https://github.com/oobabooga/text-generation-webui/blob/main...
Consider a finetune - they're faster and relatively cheap (like, under $30 rented compute time). The link above lists them, but the steps are to gather a dataset, do the training, and evaluate your results. LLMs are about instruction/evaluation, so it's easy to show results, measure perplexity, and compare against the base model.
If you're interested in building a limited dataset, fun ideas might be quotes or conversations from your classmates, lessons or syllabi from your program, or other specific, local, testable information. Datasets aren't plug and play, and they're the most important part of a model.
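If it helps to picture what "gather a dataset" means in practice, a single instruction-tuning record is usually just a small JSON object; the alpaca-style field names below are one common convention, not a requirement of any particular training script:

```python
# One alpaca-style instruction-tuning record; field names vary by training script.
example_record = {
    "instruction": "Summarize the grading policy for our IB Biology HL course.",
    "input": "",  # optional extra context, left empty for a pure instruction
    "output": "Grades are based on the internal assessment plus three final papers...",
}

# A dataset is typically just a JSON/JSONL file containing a list of such records.
dataset = [example_record]
```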
However, even using the same dataset can yield different results based on training parameters. I'd keep it simple and either make the test about the impact of differences in training parameters using a single dataset, or pick two already created datasets and train using the same parameters for comparison.
Good luck in IB! I was in it until I moved cities, and it was a blast.
- AirLLM enables 8GB MacBook run 70B LLM
-
Role-playing with AI will be a powerful tool for writers and educators
Right, sorry, I forgot to add that you can override the URL with `OPENAI_API_BASE` and point it at a text-generation-webui OpenAI API[0] compliant endpoint.
0: https://github.com/oobabooga/text-generation-webui/discussio...
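As a concrete illustration of that override (the port assumes text-generation-webui's OpenAI-compatible API; which variable is honored depends on the client - older openai releases and LangChain read OPENAI_API_BASE, while openai >= 1.0 reads OPENAI_BASE_URL):

```python
import os

# Set these before the OpenAI client (or LangChain LLM) is constructed.
os.environ["OPENAI_API_BASE"] = "http://127.0.0.1:5000/v1"  # older openai releases / LangChain
os.environ["OPENAI_BASE_URL"] = "http://127.0.0.1:5000/v1"  # openai >= 1.0
os.environ.setdefault("OPENAI_API_KEY", "not-needed-locally")
```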
What are some alternatives?
ollama - Get up and running with Llama 2, Mistral, Gemma, and other large language models.
KoboldAI
gpt4all - gpt4all: run open-source LLMs anywhere
TavernAI - Atmospheric adventure chat for AI language models (KoboldAI, NovelAI, Pygmalion, OpenAI chatgpt, gpt-4)
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
KoboldAI-Client
ggml - Tensor library for machine learning
alpaca-lora - Instruct-tune LLaMA on consumer hardware
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM