tinygrad vs llama.cpp
| | tinygrad | llama.cpp |
|---|---|---|
| Mentions | 17 | 766 |
| Stars | 23,711 | 55,117 |
| Growth | 5.2% | - |
| Activity | 9.9 | 9.9 |
| Latest commit | 3 days ago | 6 days ago |
| Language | Python | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
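The exact formula behind the activity number isn't published; as a rough illustration only, a recency-weighted score could be computed along these lines (a hypothetical sketch, not the actual metric):

```python
# Hypothetical sketch of a recency-weighted activity score; NOT the site's
# actual (unpublished) formula. Recent commits count more than older ones.
import math
import time

def activity_score(commit_timestamps: list[float], half_life_days: float = 30.0) -> float:
    now = time.time()
    half_life = half_life_days * 86400  # seconds
    # Each commit's weight halves every `half_life_days` days.
    return sum(math.exp(-math.log(2) * (now - t) / half_life) for t in commit_timestamps)
```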
tinygrad
- AMD Unveils Ryzen 8000G Series Processors: Zen 4 APUs for Desktop with Ryzen AI
Not sure if I completely understand what "Ryzen AI" does, but Tinygrad for example has some limited support for RDNA3[0]. It isn't quite there yet in terms of performance though, as you can read in the comments of that file.
There's also a small tutorial by AMD on how to use the WMMA intrinsic[1] with AMD's hipcc[2] compiler. Documentation is kinda sparse, but the instruction set is not huge. The RDNA3 ISA guide[3] might also be helpful (and only a fraction of its pages are relevant).
0. https://github.com/tinygrad/tinygrad/blob/master/extra/gemm/...
1. https://gpuopen.com/learn/wmma_on_rdna3/
2. https://github.com/ROCm/HIPCC
3. https://www.amd.com/content/dam/amd/en/documents/radeon-tech...
- Tinygrad 0.8.0 Release
- Beyond Backpropagation - Higher Order, Forward and Reverse-mode Automatic Differentiation for Tensorken
This post describes how I added automatic differentiation to Tensorken. Tensorken is my attempt to build a fully featured yet easy-to-understand and hackable implementation of a deep learning library in Rust. It takes inspiration from the likes of PyTorch, Tinygrad, and JAX.
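The post builds this machinery in Rust for Tensorken; as a taste of the core idea, here is a minimal forward-mode AD sketch using dual numbers in Python (illustrative only, not Tensorken's code):

```python
# Minimal forward-mode AD with dual numbers (illustrative; not Tensorken's code).
from dataclasses import dataclass

@dataclass
class Dual:
    val: float  # primal value
    eps: float  # tangent (derivative carried alongside the value)

    def __add__(self, other: "Dual") -> "Dual":
        return Dual(self.val + other.val, self.eps + other.eps)

    def __mul__(self, other: "Dual") -> "Dual":
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.eps * other.val + self.val * other.eps)

def f(x: Dual) -> Dual:
    return x * x + x  # f(x) = x^2 + x, so f'(x) = 2x + 1

print(f(Dual(3.0, 1.0)))  # Dual(val=12.0, eps=7.0): f(3) = 12, f'(3) = 7
```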
- [D] What is a good way to maintain code readability and code quality while scaling up complexity in libraries like Hugging Face?
What do you think about tinygrad? I think it's a good example of a growing, well-written, and (partially) well-documented library with many close-to-reference implementations.
- AMD MI300 Performance – Faster Than H100, but How Much?
The idea of model architecture making fast hardware design easier is what makes https://github.com/tinygrad/tinygrad so interesting.
- 💻 7 Open-Source DevTools That Save Time You Didn't Know to Exist ⌛🚀
Website: https://tinygrad.org/
- Tinygrad
- How to train an Iris dataset classifier with Tinygrad
Before we begin, make sure you have tinygrad and the required dependencies installed. You can find the installation instructions here.
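As a taste of what such a tutorial covers, a training loop in tinygrad might look roughly like this (a minimal sketch against a ~0.8-era API, which can shift between releases; the random arrays are stand-ins for the real Iris features and labels):

```python
# Minimal sketch of an Iris-style classifier in tinygrad (~0.8-era API; may
# differ from the tutorial's code). Random arrays stand in for the real dataset.
import numpy as np
from tinygrad.tensor import Tensor
from tinygrad.nn import Linear
from tinygrad.nn.optim import SGD
from tinygrad.nn.state import get_parameters

class IrisNet:
    def __init__(self):
        self.l1 = Linear(4, 16)  # 4 features: sepal/petal length and width
        self.l2 = Linear(16, 3)  # 3 classes: setosa, versicolor, virginica

    def __call__(self, x: Tensor) -> Tensor:
        return self.l2(self.l1(x).relu())

model = IrisNet()
opt = SGD(get_parameters(model), lr=0.01)

X = Tensor(np.random.rand(150, 4).astype(np.float32))           # placeholder features
y = Tensor(np.random.randint(0, 3, size=150).astype(np.int32))  # placeholder labels

Tensor.training = True
for step in range(100):
    opt.zero_grad()
    loss = model(X).sparse_categorical_crossentropy(y)
    loss.backward()
    opt.step()
```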
- Decomposing Language Models into Understandable Components
Try to get something like tinygrad[1] running locally; that way you can tweak things a bit, run it again, and see how it performs. While doing this you'll pick up most of the concepts and get a feeling for how things work. Also take a look at projects like llama.cpp[2]; you don't have to fully understand what's going on there, though.
You may need some intermediate knowledge of linear algebra, and of this thing called "data science" nowadays, which is pretty much knowing how to mangle data and visualize it.
Try creating a small model on your own; it doesn't have to be super fancy, just make sure it does something you want it to do. And then... you can probably go on from there on your own.
- Tinygrad 0.7.0
llama.cpp
- Llama.cpp Working on Support for Llama3
- Embeddings are a good starting point for the AI curious app developer
Have just done this recently for the local chat-with-PDF feature in https://recurse.chat (it's a macOS app with a built-in llama.cpp server and a local vector database).
Running an embedding server locally is pretty straightforward:
- Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
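Once the server is up with embeddings enabled (e.g. started with the `--embedding` flag), calling it from Python might look like this. A hedged sketch: the route and JSON shape have varied across llama.cpp versions, so check the server README for your build.

```python
# Hedged sketch of querying a local llama.cpp server for embeddings. Assumes
# the server was started with embeddings enabled, e.g.:
#   ./server -m model.gguf --embedding --port 8080
# The /embedding route and response shape have varied across llama.cpp versions.
import json
import urllib.request

def embed(text: str, host: str = "http://127.0.0.1:8080") -> list[float]:
    req = urllib.request.Request(
        f"{host}/embedding",
        data=json.dumps({"content": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]

vec = embed("Embeddings are a good starting point")
print(len(vec))  # embedding dimension depends on the model
```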
- Mixtral 8x22B
- Llama.cpp: Improve CPU prompt eval speed
- Ollama 0.1.32: WizardLM 2, Mixtral 8x22B, macOS CPU/GPU model split
Ah, thanks for this! Unfortunately I can no longer edit the parent comment you replied to.
As I said, I only compared the contributors graphs [0] and checked for overlaps. But those apparently only go back about a year and list at most 100 contributors, ranked by number of commits.
[0]: https://github.com/ollama/ollama/graphs/contributors and https://github.com/ggerganov/llama.cpp/graphs/contributors
- KodiBot - Local Chatbot App for Desktop
KodiBot is a desktop app that lets users run their own AI chat assistants locally and offline on Windows, macOS, and Linux. It is standalone and does not require an internet connection or additional dependencies to run local chat assistants. It supports both llama.cpp-compatible models and the OpenAI API.
- Mixture-of-Depths: Dynamically allocating compute in transformers
There are already some implementations out there which attempt to accomplish this!
Here's an example: https://github.com/silphendio/sliced_llama
A gist pertaining to said example: https://gist.github.com/silphendio/535cd9c1821aa1290aa10d587...
Here's a discussion about integrating this capability with ExLlama: https://github.com/turboderp/exllamav2/pull/275
And same as above but for llama.cpp: https://github.com/ggerganov/llama.cpp/issues/4718#issuecomm...
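For intuition, the core Mixture-of-Depths trick is a per-token router that decides which tokens get a block's full compute and which skip it via the residual path. A toy numpy sketch of that routing idea (illustrative only, not the code from any of the projects linked above):

```python
# Toy sketch of Mixture-of-Depths routing (illustrative; not any linked
# project's code). A router scores tokens; only the top-k pass through the
# block, the rest ride the residual connection unchanged.
import numpy as np

def mod_block(x: np.ndarray, block, router_w: np.ndarray, capacity: float = 0.5) -> np.ndarray:
    scores = x @ router_w                   # one routing score per token
    k = max(1, int(capacity * x.shape[0]))  # compute budget: top-k tokens
    chosen = np.argsort(scores)[-k:]
    out = x.copy()                          # skipped tokens pass through unchanged
    out[chosen] = x[chosen] + block(x[chosen])
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))   # 8 tokens, 16-dim hidden state
W = rng.normal(size=(16, 16))  # toy block weights
y = mod_block(x, lambda h: np.tanh(h @ W), rng.normal(size=(16,)))
```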
- The lifecycle of a code AI completion
For those who might not be aware of this, there is also an open source project on GitHub called "Twinny" which is an offline Visual Studio Code plugin equivalent to Copilot: https://github.com/rjmacarthy/twinny
It can be used with a number of local model services. Currently, for my setup on an NVIDIA 4090, I'm running both the base and instruct models for deepseek-coder 6.7b using 5_K_M-quantized GGUF files (for performance) through the llama.cpp "server", where the base model handles completions and the instruct model handles chat interactions.
llama.cpp: https://github.com/ggerganov/llama.cpp/
deepseek-coder 6.7b base GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-base-GGU...
deepseek-coder 6.7b instruct GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct...
- More Agents Is All You Need: LLMs performance scales with the number of agents
If I'm reading this correctly, they had to discard the Llama 2 answers and use only the answers given by GPT-3.5 to test the hypothesis.
GPT-3.5 answering questions through the OAI API alone is not an acceptable method of testing problem solving ability across a range of temperatures. OpenAI does some blackbox wizardry on their end.
There are many complex and clever sampling techniques, of which temperature is just one (possibly dynamic) component. One example from the llama.cpp codebase is dynamic temperature sampling:
https://github.com/ggerganov/llama.cpp/pull/4972/files
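Roughly, the idea in that PR is to scale temperature with the entropy of the token distribution: confident (low-entropy) distributions get sampled colder, uncertain ones hotter. A rough Python sketch of the concept (names and defaults are illustrative, not llama.cpp's actual API):

```python
# Rough sketch of entropy-based dynamic temperature, in the spirit of the PR
# linked above. Names and defaults are illustrative, not llama.cpp's API.
import numpy as np

def dynamic_temperature(logits: np.ndarray, min_temp: float = 0.5,
                        max_temp: float = 1.5, exponent: float = 1.0) -> float:
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    max_entropy = np.log(len(probs))  # entropy of a uniform distribution
    norm = entropy / max_entropy      # 0 = fully confident, 1 = fully uncertain
    # Confident distributions get a lower temperature, uncertain ones a higher one.
    return min_temp + (max_temp - min_temp) * norm ** exponent

logits = np.array([4.0, 1.0, 0.5, -1.0])
t = dynamic_temperature(logits)
scaled_logits = logits / t  # applied before sampling
```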
Not sure what you mean by "whole model state", given that there are tens of thousands of possible tokens and the models have billions of parameters in XX,XXX-dimensional space. How many queries across how many sampling methods might you need? Err... how much time? :)
- Hosting Your Own AI Chatbot on Android Devices
git clone https://github.com/ggerganov/llama.cpp.git
What are some alternatives?
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
ollama - Get up and running with Llama 2, Mistral, Gemma, and other large language models.
jax - Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
gpt4all - gpt4all: run open-source LLMs anywhere
llama - Inference code for Llama models
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
openpilot - openpilot is an open source driver assistance system. openpilot performs the functions of Automated Lane Centering and Adaptive Cruise Control for 250+ supported car makes and models.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
tensorflow_macos - TensorFlow for macOS 11.0+ accelerated using Apple's ML Compute framework.
ggml - Tensor library for machine learning
stable-diffusion.cpp - Stable Diffusion in pure C/C++
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM