cformers vs llama.cpp

| | cformers | llama.cpp |
|---|---|---|
| Mentions | 4 | 782 |
| Stars | 315 | 58,425 |
| Growth | 0.6% | - |
| Activity | 6.7 | 10.0 |
| Latest Commit | 5 months ago | 3 days ago |
| Language | C | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
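The site does not publish its activity formula; as a hedged illustration of "recent commits have higher weight than older ones", here is one plausible recency-weighted score using exponential decay (the half-life and the decay model are assumptions, not the site's actual method):

```python
import math
import time

# Hedged illustration of a recency-weighted activity score. The real formula
# is not published; exponential decay over commit age is an assumed stand-in
# for "recent commits have higher weight than older ones".
def activity_score(commit_timestamps, half_life_days=30.0):
    now = time.time()
    decay = math.log(2) / (half_life_days * 86400)  # per-second decay rate
    return sum(math.exp(-decay * (now - t)) for t in commit_timestamps)

# A commit made today counts ~1.0; one made 30 days ago counts ~0.5.
print(activity_score([time.time(), time.time() - 30 * 86400]))
```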
cformers
- [P] rwkv.cpp: FP16 & INT4 inference on CPU for RWKV language model
It's a combination of things, and removing Python from the loop isn't essential to achieving most of these performance gains. The main trick is quantizing the weights and compiling the model. A concrete example that builds on top of ggml with Python APIs: https://github.com/NolanoOrg/cformers
- Cformers 🚀 - "Transformers with a C-backend for lightning-fast CPU inference". | Nolano
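The comment above credits most of the speedup to weight quantization. As a minimal sketch of what symmetric 4-bit quantization of a weight matrix looks like (illustrative only; ggml's real formats such as Q4_0 use per-block scales and a packed byte layout, which this omits):

```python
import numpy as np

# Minimal sketch of symmetric INT4 weight quantization, the kind of trick the
# comment above refers to. Illustrative only, not ggml's actual scheme.
def quantize_int4(w: np.ndarray):
    scale = np.abs(w).max() / 7.0                        # map weights into [-7, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_int4(w)
print(np.abs(w - dequantize_int4(q, s)).max())           # quantization error
```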
- FauxPilot – an open-source GitHub Copilot server
We will add quantized CodeGen for fast inference on CPUs to cformers (https://github.com/NolanoOrg/cformers/) later today.
llama.cpp
- IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile stuff, then looking at llama.cpp (what Ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF model on Hugging Face.
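To make the server suggestion concrete, here is a minimal sketch of querying a locally running llama.cpp server over its HTTP /completion endpoint; the prompt, token count, and default port 8080 are assumptions about how the server was started.

```python
import json
import urllib.request

# Hedged sketch: query a locally running llama.cpp server's /completion
# endpoint. Assumes the server example was started with a GGUF model on the
# default port 8080, e.g. `./server -m model.gguf`.
req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps({"prompt": "Q: What is GGUF?\nA:", "n_predict": 64}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["content"])  # generated text from the server
```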
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
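To put numbers on those fractions, a quick back-of-the-envelope calculation (the 64 GB machine size is an assumption for illustration):

```python
# Back-of-the-envelope numbers for the Metal limits described above,
# assuming a 64 GB Apple Silicon machine (the size is an assumption).
total_ram_gb = 64
per_process_gb = total_ram_gb / 2      # cap for a single process
gpu_overall_gb = total_ram_gb * 3 / 4  # cap for GPU allocations overall
print(f"one process: {per_process_gb} GB, GPU overall: {gpu_overall_gb} GB")
# -> one process: 32.0 GB, GPU overall: 48.0 GB
```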
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
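The llama.cpp example linked above is not reproduced here; instead, a toy Python sketch of the draft-and-verify loop that multi-token prediction enables: extra heads draft several tokens ahead and the main head keeps the prefix it agrees with. The ToyModel and its methods are stand-ins, not the example's actual API.

```python
# Hedged sketch of self-speculative decoding with multi-token prediction.
# ToyModel is a deterministic stand-in so the loop is runnable end to end.
class ToyModel:
    def draft_k(self, ids, k):
        # Pretend the extra prediction heads propose the next k tokens.
        return [(ids[-1] + i + 1) % 100 for i in range(k)]

    def main_head_next(self, ids):
        # Pretend the main head deterministically picks the next token.
        return (ids[-1] + 1) % 100

def generate(model, ids, max_new=16, k=4):
    produced = 0
    while produced < max_new:
        draft = model.draft_k(ids, k)
        accepted = []
        for tok in draft:                       # verify draft tokens one by one
            if model.main_head_next(ids + accepted) == tok:
                accepted.append(tok)
            else:
                break
        if not accepted:                        # always advance at least one token
            accepted = [model.main_head_next(ids)]
        ids = ids + accepted
        produced += len(accepted)
    return ids

print(generate(ToyModel(), [0]))
```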
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial, we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
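As a taste of what the tutorial's LoRA step does mathematically, here is a minimal NumPy sketch of the low-rank update; the dimensions, rank, and alpha are illustrative assumptions, and this is not the KitOps or llama.cpp API.

```python
import numpy as np

# Minimal sketch of the LoRA idea: instead of updating the full weight W,
# train a low-rank pair (A, B) and add their (scaled) product to W.
d, k, r = 512, 512, 8                  # layer dims and LoRA rank (assumed)
W = np.random.randn(d, k) * 0.02       # frozen pretrained weight
A = np.random.randn(r, k) * 0.01       # trainable
B = np.zeros((d, r))                   # trainable, zero-init so training starts at W
alpha = 16.0                           # scaling hyperparameter (assumed)

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B get gradients.
    return x @ (W + (alpha / r) * (B @ A)).T

y = forward(np.random.randn(1, k))
print(y.shape)  # (1, 512)
```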
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
Well, https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
RWKV-LM - RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
gpt4all - gpt4all: run open-source LLMs anywhere
CodeGen - CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
rwkv.cpp - INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
llm - An ecosystem of Rust libraries for working with large language models
ggml - Tensor library for machine learning
gpt4all.cpp - Locally run an Assistant-Tuned Chat-Style LLM