| | 8cc | llama.cpp |
|---|---|---|
| Mentions | 6 | 775 |
| Stars | 6,041 | 57,463 |
| Growth | - | - |
| Activity | 0.0 | 10.0 |
| Latest commit | 11 months ago | 3 days ago |
| Language | C | C++ |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
8cc
- Apple hiring compiler developers for improving Swift / C++ interoperability
- lambda-8cc: An x86 C compiler written in untyped lambda calculus
The compiler looks to be here: https://github.com/rui314/8cc
- C meeting is over. C23 added
- Compiler career advice?
Implement a compiler for a subset of C. This doesn't need to be self-hosting, but bonus points if it is. Here's an example of what it can look like: https://github.com/rui314/8cc
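As a concrete illustration of that first step, here is a sketch in the spirit of 8cc/chibicc (not code from either project): a subset-of-C compiler can begin by translating integer expressions with + and - straight into x86-64 assembly.

```c
/* Minimal sketch: compile an expression like "5+20-4" to x86-64
 * assembly. Illustrative only; no names here come from 8cc itself. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <expr>\n", argv[0]);
        return 1;
    }
    char *p = argv[1];

    printf(".globl main\n");
    printf("main:\n");
    /* load the first number into %rax */
    printf("  mov $%ld, %%rax\n", strtol(p, &p, 10));

    /* fold each "+n" / "-n" into an add/sub instruction */
    while (*p) {
        if (*p == '+') {
            p++;
            printf("  add $%ld, %%rax\n", strtol(p, &p, 10));
        } else if (*p == '-') {
            p++;
            printf("  sub $%ld, %%rax\n", strtol(p, &p, 10));
        } else {
            fprintf(stderr, "unexpected character: '%c'\n", *p);
            return 1;
        }
    }
    printf("  ret\n");  /* result is main's exit status */
    return 0;
}
```

Assembling the output (`./minicc '5+20-4' > tmp.s && gcc -o tmp tmp.s && ./tmp; echo $?` prints 21) gives an end-to-end pipeline on day one; from there the subset grows toward real C, with self-hosting as the stretch goal.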
- NCC, a new ANSI/ISO C compiler
While this is impressive work, I feel that there are a lot of "tiny" C compilers out there; how is yours any different from SmallerC, TinyC, 8cc, chibicc, and many others?
- Linus Torvalds on where Rust will fit into Linux
See https://github.com/rui314/8cc https://github.com/rswier/c4 for a demonstration of this.
llama.cpp
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
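A hedged sketch of inspecting that limit programmatically, assuming the iogpu.wired_limit_mb sysctl key mentioned in the linked discussion (Apple Silicon macOS only), where the thread's workaround is raising the value with `sudo sysctl iogpu.wired_limit_mb=<MB>`:

```c
/* Assumption: the iogpu.wired_limit_mb sysctl from the linked llama.cpp
 * discussion; a value of 0 means the stock default (roughly the 1/2 per
 * process, 3/4 overall split described above). macOS only. */
#include <stdio.h>
#include <sys/sysctl.h>

int main(void) {
    long long limit_mb = 0;
    size_t len = sizeof(limit_mb);
    if (sysctlbyname("iogpu.wired_limit_mb", &limit_mb, &len, NULL, 0) != 0) {
        perror("sysctlbyname");  /* key absent on older macOS / Intel */
        return 1;
    }
    printf("GPU wired limit: %lld MB (0 = system default)\n", limit_mb);
    return 0;
}
```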
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
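Independent of the linked example, the draft-and-verify loop at the heart of multi-token/speculative decoding is easy to show in miniature. This toy is not llama.cpp code; draft_next and target_next are hypothetical stand-ins for a cheap multi-token predictor and the full model:

```c
/* Toy draft-and-verify decoding over integer "tokens". */
#include <stdio.h>

#define DRAFT_LEN 4

/* hypothetical cheap predictor: guesses "previous token + 1" */
static int draft_next(int prev) { return prev + 1; }

/* hypothetical full model: same rule, but breaks the pattern after
 * every multiple of 5, so some drafts get rejected */
static int target_next(int prev) { return (prev % 5 == 0) ? prev + 2 : prev + 1; }

int main(void) {
    int tok = 1;
    for (int step = 0; step < 4; step++) {
        /* 1. draft several tokens ahead with the cheap model */
        int draft[DRAFT_LEN], prev = tok;
        for (int i = 0; i < DRAFT_LEN; i++) { draft[i] = draft_next(prev); prev = draft[i]; }

        /* 2. verify with the full model; accept the longest agreeing
         * prefix, plus the full model's fix-up token on a mismatch */
        int accepted = 0;
        prev = tok;
        for (int i = 0; i < DRAFT_LEN; i++) {
            int t = target_next(prev);
            if (t != draft[i]) { draft[i] = t; accepted = i + 1; break; }
            accepted = i + 1;
            prev = t;
        }

        for (int i = 0; i < accepted; i++) printf("%d ", draft[i]);
        tok = draft[accepted - 1];
    }
    printf("\n");
    return 0;
}
```

The payoff is that one verification pass of the full model can commit several drafted tokens when the two agree, and degrades gracefully to one token when they don't.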
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
- Embeddings are a good starting point for the AI curious app developer
I've just done this recently for a local chat-with-PDF feature in https://recurse.chat (a macOS app with a built-in llama.cpp server and a local vector database).
Running an embedding server locally is pretty straightforward; a minimal client sketch follows the steps below:
- Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
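To sketch the client side, here is a minimal example, assuming the server's /embedding endpoint and its {"content": ...} request shape, with the server started along the lines of `./server -m model.gguf --embedding`:

```c
/* Minimal embedding client for a local llama.cpp server.
 * Build: cc embed.c -lcurl */
#include <stdio.h>
#include <curl/curl.h>

/* stream the raw JSON response ({"embedding": [...]}) to stdout */
static size_t on_body(char *data, size_t size, size_t nmemb, void *userdata) {
    (void)userdata;
    fwrite(data, size, nmemb, stdout);
    return size * nmemb;
}

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    struct curl_slist *hdrs = curl_slist_append(NULL, "Content-Type: application/json");
    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8080/embedding");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                     "{\"content\": \"Embeddings are a good starting point\"}");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```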
What are some alternatives?
chibicc - A small C compiler
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
SmallerC - Simple C compiler
gpt4all - gpt4all: run open-source LLMs anywhere
ncc - classic (K&R) C compiler for AMD64
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
Ne10 - An open optimized software library project for the ARM® Architecture
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
rust - Empowering everyone to build reliable and efficient software.
ggml - Tensor library for machine learning
rust - Rust for the xtensa architecture. Built in targets for the ESP32 and ESP8266
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM