ggml vs GLM-130B

| | ggml | GLM-130B |
|---|---|---|
| Mentions | 69 | 19 |
| Stars | 9,725 | 7,610 |
| Growth | - | 0.3% |
| Activity | 9.8 | 4.8 |
| Latest commit | 5 days ago | 9 months ago |
| Language | C | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ggml
-
LLMs on your local Computer (Part 1)
git clone https://github.com/ggerganov/ggml
cd ggml
mkdir build
cd build
cmake ..
make -j4 gpt-j
../examples/gpt-j/download-ggml-model.sh 6B
-
GGUF, the Long Way Around
Cool. I was just learning about GGUF by creating my own parser for it based on the spec https://github.com/ggerganov/ggml/blob/master/docs/gguf.md (for educational purposes)
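For anyone following the same path, the header is an easy starting point: per the spec, a GGUF file begins with a little-endian magic, version, tensor count, and metadata key/value count. A minimal C sketch of reading just those fields (assumes a little-endian host; versions before GGUFv2 used 32-bit counts):

```c
#include <stdio.h>
#include <stdint.h>

// Read the fixed GGUF header: magic (the bytes "GGUF"), version (uint32),
// tensor_count (uint64), metadata_kv_count (uint64), all little-endian.
int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s model.gguf\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    uint32_t magic, version;
    uint64_t n_tensors, n_kv;
    if (fread(&magic,     sizeof magic,     1, f) != 1 ||
        fread(&version,   sizeof version,   1, f) != 1 ||
        fread(&n_tensors, sizeof n_tensors, 1, f) != 1 ||
        fread(&n_kv,      sizeof n_kv,      1, f) != 1) {
        fprintf(stderr, "short read\n"); fclose(f); return 1;
    }
    fclose(f);

    if (magic != 0x46554747) { // "GGUF" interpreted as a little-endian uint32
        fprintf(stderr, "not a GGUF file\n"); return 1;
    }
    printf("GGUF v%u: %llu tensors, %llu metadata key/value pairs\n",
           version, (unsigned long long)n_tensors, (unsigned long long)n_kv);
    return 0;
}
```

The metadata key/value pairs and tensor infos follow immediately after these fields, so this is the natural scaffold to grow a full parser from.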
-
Ask HN: People who switched from GPT to their own models. How was it?
If you don't care about the details of how those model servers work, then something that abstracts out the whole process like LM Studio or Ollama is all you need.
However, if you want to get into the weeds of how this actually works, I recommend you look up model quantization and some libraries like ggml[1] that actually do that for you.
[1] https://github.com/ggerganov/ggml
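To make "model quantization" concrete, here is a simplified block-wise 8-bit scheme in the spirit of ggml's Q8_0 format: each block of 32 weights is stored as one scale plus 32 int8 values. This is a sketch only; ggml's actual layout differs in details (it stores the per-block scale as FP16, for instance):

```c
#include <math.h>
#include <stdint.h>

#define QK 32  // block size

// Hypothetical block layout: one fp32 scale + 32 quantized weights.
typedef struct {
    float  d;       // per-block scale
    int8_t qs[QK];  // quantized values
} block_q8;

void quantize_block(const float *x, block_q8 *out) {
    // find the largest magnitude in the block
    float amax = 0.0f;
    for (int i = 0; i < QK; i++) {
        float ax = fabsf(x[i]);
        if (ax > amax) amax = ax;
    }
    // map [-amax, amax] onto [-127, 127]
    const float d  = amax / 127.0f;
    const float id = d ? 1.0f / d : 0.0f;
    out->d = d;
    for (int i = 0; i < QK; i++) {
        out->qs[i] = (int8_t)roundf(x[i] * id);
    }
}
```

Dequantization is just `x[i] ≈ d * qs[i]`, so each 4-byte float shrinks to about 1 byte plus a small per-block overhead; the 4-bit variants push the same idea further.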
- GGUF File Format
-
Google just shipped libggml from llama-cpp into its Android AICore
Because the library is called ggml, but it supports gguf.
-
Q-Transformer
Apparently this guy, like a bunch of others (e.g. https://github.com/ggerganov/ggml), is implementing transformers from papers for people who want them. Pretty cool.
-
[P] Inference Vision Transformer (ViT) in plain C/C++ with ggml
You can access it here: https://github.com/staghado/vit.cpp It has been added to the ggml library on GitHub: https://github.com/ggerganov/ggml
-
Falcon 180B Released
https://github.com/ggerganov/ggml
One note is that prompt ingestion is extremely slow on CPU compared to GPU. So short prompts are fine (as tokens can be streamed once the prompt is ingested), but long prompts feel extremely sluggish.
-
Stable Diffusion in pure C/C++
I did a quick run under a profiler, and on my AVX2 laptop the slowest part (>50%) was matrix multiplication (sgemm).
In the current version of GGML, if OpenBLAS is enabled, they convert matrices to FP32 before running sgemm.
If OpenBLAS is disabled, on AVX2 platforms they convert FP16 to FP32 on every FMA operation, which is even worse (due to the repeated conversion). In that configuration, both ggml_vec_dot_f16 and ggml_vec_dot_f32 took first place in the profiler.
Source: https://github.com/ggerganov/ggml/blob/master/src/ggml.c#L10...
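To illustrate the pattern being described (a hypothetical sketch, not ggml's actual ggml_vec_dot_f16), here is an AVX2/F16C dot product over FP16 inputs; each iteration pays for two FP16→FP32 conversions before it can issue a single FMA, which is the repeated-conversion overhead showing up in the profile:

```c
#include <immintrin.h> // AVX2 + F16C + FMA; compile with -mavx2 -mf16c -mfma
#include <stdint.h>

// Dot product over IEEE half-precision data. Assumes n is a multiple of 8.
float vec_dot_f16_sketch(int n, const uint16_t *x, const uint16_t *y) {
    __m256 acc = _mm256_setzero_ps();
    for (int i = 0; i < n; i += 8) {
        // two FP16 -> FP32 conversions per iteration...
        __m256 xf = _mm256_cvtph_ps(_mm_loadu_si128((const __m128i *)(x + i)));
        __m256 yf = _mm256_cvtph_ps(_mm_loadu_si128((const __m128i *)(y + i)));
        // ...for one fused multiply-add in FP32
        acc = _mm256_fmadd_ps(xf, yf, acc);
    }
    // horizontal sum of the 8 accumulator lanes
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    float sum = 0.0f;
    for (int i = 0; i < 8; i++) sum += lanes[i];
    return sum;
}
```

Converting whole matrices to FP32 once (the OpenBLAS path) trades memory for doing that conversion a single time instead of on every pass over the data.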
-
Accessing Llama 2 from the command-line with the LLM-replicate plugin
For those getting started, the easiest one click installer I've used is Nomic.ai's gpt4all: https://gpt4all.io/
This runs with a simple GUI on Windows/Mac/Linux, leverages a fork of llama.cpp on the backend and supports GPU acceleration, and LLaMA, Falcon, MPT, and GPT-J models. It also has API/CLI bindings.
I just saw a slick new tool, https://ollama.ai/, that will let you run llama2-7b with a single `ollama run llama2` command. It has a very simple one-click installer for Apple Silicon Macs (you need to build from source for anything else at the moment). It looks like it only supports llamas OOTB, but it also seems to use llama.cpp (via a Go adapter) on the backend. It seemed to be CPU-only on my MBA, but I didn't poke at it too much, and it's brand new, so we'll see.
For anyone on HN, they should probably be looking at https://github.com/ggerganov/llama.cpp and https://github.com/ggerganov/ggml directly. If you have a high-end Nvidia consumer card (3090/4090) I'd highly recommend looking into https://github.com/turboderp/exllama
For those generally confused, the r/LocalLLaMA wiki is a good place to start: https://www.reddit.com/r/LocalLLaMA/wiki/guide/
I've also been porting my own notes into a single location that tracks models, evals, and has guides focused on local models: https://llm-tracker.info/
GLM-130B
-
GLM-130B
The https://github.com/THUDM/GLM-130B model is trained on The Pile and can run on 4x3090 when quantized to INT4. I'm wondering if anyone knows if this model could (or has) been quantized using GPTQ, which gives some impressive performance gains over traditional quantization, and I'm also wondering if anyone has tried a 3-bit or 2-bit quantization of such a massive model (using GPTQ). Are there any inherent limitations in this? Is there anything about this model that prevents it from being run on text-generation-webui?
- Has anyone tried GLM?
- Ask HN: Open source LLM for commercial use?
- Whichever way I look at it, I just don’t see this being the case. Why do you agree/disagree?
-
The New Bing and ChatGPT
> GLM-130B, a model comparable with GPT-3, has 130 billion parameters in FP16 precision, a total of 260G of GPU memory is required to store model weights. The DGX-A100 server has 8 A100s and provides an amount of 320G of GPU memory (640G for 80G A100 version) so it suits GLM-130B well.
https://github.com/THUDM/GLM-130B/blob/main/docs/low-resourc...
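For reference, the arithmetic behind the quoted figure: FP16 uses 2 bytes per parameter, so 130 × 10⁹ params × 2 B ≈ 260 GB of weights. The INT4 quantization mentioned elsewhere in this thread cuts that to 130 × 10⁹ × 0.5 B ≈ 65 GB, which is why a 4x RTX 3090 setup (4 × 24 GB = 96 GB) can hold it.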
-
OpenAI Major Outage
GLM-130B[1] (a 130-billion-parameter model, vs GPT-3's 175 billion) is able to run optimally on consumer-level high-end hardware, 4x RTX 3090 in particular. That's < $4k at current prices, and as hardware prices go, one can only imagine what it'll be in a year or two. It also enables running with degraded performance on lesser systems.
It's a whole lot cheaper to run neural-net-style systems than to train them. "Somebody on Twitter"[2] got it set up, broke down the costs, demonstrated some prompts, and so on. The cliff notes: a fraction of a penny per query, with each response taking about 16 s to generate. The output's pretty terrible, but it's unclear to me whether that's inherent or a result of priorities. I expect OpenAI spent a lot of manpower on supervised training, whereas this system probably had minimal, especially in English (it's from a Chinese university).
[1] - https://github.com/THUDM/GLM-130B
[2] - https://twitter.com/alexjc/status/1617152800571416577
- [D] Are there any known AI systems today that are significantly more advanced than ChatGPT?
-
Will there ever be a "Stable Diffusion chat AI" that we can run at home like one can do with Stable Diffusion? A "roll-your-own at home ChatGPT"?
GLM-130B in 4-bit mode is better than GPT-3 and can run on 4 RTX 3090s. Still expensive, but it's getting closer. https://github.com/THUDM/GLM-130B
- Open-Source competitor to OpenAI?
-
Ask HN: Can you crowdfund the compute for GPT?
https://github.com/THUDM/GLM-130B might be a useful place to look
What are some alternatives?
llama.cpp - LLM inference in C/C++
PaLM-rlhf-pytorch - Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
petals - 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
alpaca-lora - Instruct-tune LLaMA on consumer hardware
Open-Assistant - OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
lm-human-preferences - Code for the paper Fine-Tuning Language Models from Human Preferences
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
hivemind - Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
llm - An ecosystem of Rust libraries for working with large language models
metaseq - Repo for external large-scale work