rocBLAS vs llama.cpp

| | rocBLAS | llama.cpp |
|---|---|---|
| Mentions | 6 | 773 |
| Stars | 317 | 57,463 |
| Growth | 2.8% | - |
| Activity | 9.7 | 10.0 |
| Latest commit | 5 days ago | about 13 hours ago |
| Language | C++ | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
rocBLAS
- Nvidia DGX GH200: The First 100 Terabyte GPU Memory System
The same is also true for https://github.com/ROCmSoftwarePlatform/rocBLAS and https://github.com/ROCmSoftwarePlatform/hipBLASLt, although the build stack and distribution leave a lot to be desired and are otherwise quite unstable.
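For anyone who hasn't used it, rocBLAS exposes a BLAS-style C API on top of HIP. A minimal single-precision GEMM looks roughly like the sketch below; this is illustrative only (the header path and build line vary across ROCm releases, and error checking is omitted for brevity):

```cpp
// Sketch of a rocBLAS SGEMM call: C = alpha*A*B + beta*C.
// Assumes a working ROCm install; build with something like:
//   hipcc sgemm_demo.cpp -lrocblas
// (older ROCm releases use <rocblas.h> instead of <rocblas/rocblas.h>)
#include <hip/hip_runtime.h>
#include <rocblas/rocblas.h>
#include <vector>
#include <cstdio>

int main() {
    const int n = 4;  // small square matrices for brevity
    std::vector<float> hA(n * n, 1.0f), hB(n * n, 2.0f), hC(n * n, 0.0f);

    float *dA, *dB, *dC;
    hipMalloc((void**)&dA, n * n * sizeof(float));
    hipMalloc((void**)&dB, n * n * sizeof(float));
    hipMalloc((void**)&dC, n * n * sizeof(float));
    hipMemcpy(dA, hA.data(), n * n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dB, hB.data(), n * n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dC, hC.data(), n * n * sizeof(float), hipMemcpyHostToDevice);

    rocblas_handle handle;
    rocblas_create_handle(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    // Column-major GEMM, no transposes: C (n x n) = A (n x n) * B (n x n)
    rocblas_sgemm(handle, rocblas_operation_none, rocblas_operation_none,
                  n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    hipMemcpy(hC.data(), dC, n * n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("C[0] = %f (expect 8.0)\n", hC[0]);  // 4 terms of 1.0 * 2.0

    rocblas_destroy_handle(handle);
    hipFree(dA); hipFree(dB); hipFree(dC);
    return 0;
}
```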
- Whisper.cpp v1.4.0
Full circle, eh. I wonder how well it compares to just using the actual Whisper models on the various existing GPU-capable bigger frameworks.
I don't know much, practically, about how hard it would be to take the Whisper PyTorch (1 or 2?) trained models and make good use of them elsewhere. I expect Whisper.cpp better caters to users and is more readily consumable.
Fwiw, Whisper.cpp uses Nvidia's cuBLAS. There does appear to be an AMD ROCm port: https://github.com/ROCmSoftwarePlatform/rocBLAS
- which CPU to choose?
It's not what you asked, but I felt I should point out that rocBLAS is no longer maintained for gfx803 (the architecture of the RX 570) and PyTorch depends on rocBLAS. PyTorch will work at least to some extent, but there are known bugs that may never be fixed. I've been trying to change this, but that's how things are right now.
- Trying to get Pytorch ROCm to work on Ubuntu 20.04 with Fiji cards
The last release that officially supported gfx803 was ROCm 3.5. All testing on that hardware ceased shortly after said release, and the code paths for that architecture have been unmaintained for nearly two years. For a specific example of a problem you may encounter, see: https://github.com/ROCmSoftwarePlatform/rocBLAS/issues/1218
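If you're not sure which architecture your card reports, HIP can query it directly. A small sketch (assuming a reasonably recent HIP; the gcnArchName field is not present in very old releases):

```cpp
// Check which GFX architecture each GPU reports to ROCm
// (gfx803 covers Fiji/Polaris-era cards such as the RX 570).
// Build with: hipcc gfx_check.cpp
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
        std::printf("No HIP devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t props;
        hipGetDeviceProperties(&props, i);
        // gcnArchName is a string like "gfx803" or "gfx1030"
        std::printf("device %d: %s (%s)\n", i, props.name, props.gcnArchName);
    }
    return 0;
}
```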
- Compute Ecosystem of AMD GPUs
- PyTorch 1.8 adds AMD ROCm support
Although the code is still there, support for (slightly) older devices is already suffering from lack of maintenance and bugs. For instance, there's a bug causing gfx803 devices to produce wrong outputs starting from mid-2020, and I'm pretty sure they're never gonna fix it.
llama.cpp
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
- Embeddings are a good starting point for the AI curious app developer
Have just done this recently for the local chat-with-PDF feature in https://recurse.chat. (It's a macOS app with a built-in llama.cpp server and a local vector database.)
Running an embedding server locally is pretty straightforward:
- Get a llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases (a sketch of querying the running server follows below)
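Once the server is running in embedding mode (started with something like `./server -m model.gguf --embedding --port 8080`; binary names, flags, and endpoints have shifted across llama.cpp releases, so treat these specifics as assumptions), it answers plain HTTP. A minimal C++ client using libcurl might look like this:

```cpp
// Sketch of a client for a llama.cpp server running with --embedding.
// The /embedding endpoint and response shape are version-dependent.
// Build with: g++ embed.cpp -lcurl
#include <curl/curl.h>
#include <cstdio>
#include <string>

// libcurl write callback: append response bytes to a std::string
static size_t on_write(char* data, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    const char* body = R"({"content": "Embeddings are a good starting point"})";
    std::string response;

    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8080/embedding");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_write);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode rc = curl_easy_perform(curl);
    if (rc == CURLE_OK)
        std::printf("%s\n", response.c_str());  // JSON holding the embedding vector
    else
        std::fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```

The returned JSON contains the embedding as an array of floats, which you can store directly in a local vector database.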
- Mixtral 8x22B
- Llama.cpp: Improve CPU prompt eval speed
What are some alternatives?
kokkos-kernels - Kokkos C++ Performance Portability Programming Ecosystem: Math Kernels - Provides BLAS, Sparse BLAS and Graph Kernels
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
gpt4all - gpt4all: run open-source LLMs anywhere
HIP-CPU - An implementation of HIP that works on CPUs, across OSes.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
AdaptiveCpp - Implementation of SYCL and C++ standard parallelism for CPUs and GPUs from all vendors: The independent, community-driven compiler for C++-based heterogeneous programming models. Lets applications adapt themselves to all the hardware in the system - even at runtime!
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
ggml - Tensor library for machine learning
hipBLASLt - hipBLASLt is a library that provides general matrix-matrix operations with a flexible API and extends functionalities beyond a traditional BLAS library
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM