nvidia-patch vs llama.cpp

| | nvidia-patch | llama.cpp |
|---|---|---|
| Mentions | 309 | 790 |
| Stars | 3,037 | 59,389 |
| Growth | - | - |
| Activity | 8.4 | 10.0 |
| Latest commit | 7 days ago | 3 days ago |
| Language | Python | C++ |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
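The exact scoring formula isn't published here, but the idea of weighting recent commits more heavily is easy to illustrate. The snippet below is only a rough sketch of such a recency-weighted score (the half-life and weighting function are made up for illustration), not the actual metric used above:

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, half_life_days=30.0):
    # Illustrative only: each commit's weight halves every `half_life_days`,
    # so recent commits contribute far more to the score than old ones.
    now = datetime.now(timezone.utc)
    total = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        total += 0.5 ** (age_days / half_life_days)
    return total

now = datetime.now(timezone.utc)
recent = [now - timedelta(days=i) for i in range(5)]        # 5 commits in the last week
old = [now - timedelta(days=300 + i) for i in range(20)]    # 20 commits ~10 months ago
print(round(activity_score(recent), 2))        # ~4.8
print(round(activity_score(recent + old), 2))  # barely higher: the old commits add only ~0.02
```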
nvidia-patch
-
Do I need to have a beefy PC to transcode 4k? Or can I just buy my brother an Nvidia shield pro and setup a cheap server on my end?
This can be patched out. https://github.com/keylase/nvidia-patch
-
Transcoding 4K HDR tone mapping
NVIDIA Corporation GA106 [GeForce RTX 3060] and I applied the patch here https://github.com/keylase/nvidia-patch
-
Linux 6.6 to Protect Against Illicit Behavior of Nvidia Proprietary Driver
> CUDA, and pretty much all optimization(hacks) done to run games better
And arbitrary limitations implemented at the driver level to force you to purchase their enterprise GPUs, see https://github.com/keylase/nvidia-patch#nvenc-and-nvfbc-patc...
-
GPU Guide (For AI Use-Cases)
Nvidia has no motivation to make a consumer card with lots of VRAM, that's basically the only (relevant) separator between the GeForce family and the Quadro lineup.
There are restrictions on NVENC streams with consumer cards, but that has been a solved problem for a while [0].
If they were to make a consumer card with more VRAM, it would immediately undercut their own Quadro/Tesla lineup, which cost substantially more. I don't see a reason for them to do it.
0: https://github.com/keylase/nvidia-patch
-
Can't hardware transcode more than 5 at a time even after all the required changes
I have never had to do the session limit bump thing from the last link. I have a 3090 as well and simply did the initial unlock, which worked fine. I would reinstall fresh drivers from Nvidia, making sure you install the newest one that is supported by the unlock tool (536.40 as of this post; the GitHub page for the patch has links to the drivers - https://github.com/keylase/nvidia-patch/tree/master/win).
- Can you flash any consumer version Nvidia card to remove the streaming limits?
-
Can my GPU transcode?
Aren't these Quadro versions? The patch here, https://github.com/keylase/nvidia-patch, supports Quadro versions if you click on the win link.
-
Let's have a talk - Guide to Choosing the Best Plex Server for You
Second, the GPU. The GPU is probably as important as the CPU - in some cases more important - and when we talk about GPUs we will primarily talk about Nvidia GPUs, as they are officially supported by the Plex team. NVIDIA GPUs are important for Plex hardware transcoding due to their dedicated video encoding/decoding units, superior performance, wide codec support, improved video quality, reduced CPU load, and power efficiency. They offer a powerful hardware acceleration solution that can greatly enhance the transcoding capabilities of a Plex server. It's also important to note that Nvidia GPUs require a patch to unlock the number of HW transcoding streams. Dedicated GPUs are large pieces of hardware and have their place in desktop PCs. However, they can also be used with mini-PCs by using an external GPU enclosure.
-
What does this Max. 3 concurrent stream cap mean anyway?
As there's no NVENC patch available (yet) for the Beta driver branch - referring to this one: https://github.com/keylase/nvidia-patch - which can lift the limits of HW transcoding, I was wondering a little: I can see 5 (HW) streams on Plex, which shouldn't be possible, no?
-
Is there somewhere that lists Nvidia GPUs?
I haven't done this yet, but there is a patch on GitHub that removes the limitation for consumer GPUs. Makes lower-end cards more attractive for this type of work.
llama.cpp
-
IBM Granite: A Family of Open Foundation Models for Code Intelligence
if you can compile stuff, then looking at llama.cpp (what ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
the server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF on huggingface
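For anyone curious what talking to that server looks like in practice, here's a minimal sketch. It assumes the llama.cpp server example is already running locally with a GGUF model loaded and listening on its default port 8080; the endpoint and response fields may differ between versions:

```python
import json
import urllib.request

# Assumes the llama.cpp server example is running locally with a GGUF model
# loaded, listening on its default port 8080.
payload = {
    "prompt": "### Instruction: Explain what a GGUF file is.\n### Response:",
    "n_predict": 64,      # maximum number of tokens to generate
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

# The generated text comes back in the "content" field of the JSON response.
print(result.get("content", ""))
```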
-
Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
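Taking those fractions at face value, the per-process and overall GPU memory budgets work out roughly as below (a back-of-the-envelope sketch based on the comment above, not an Apple-documented limit):

```python
def metal_budget(total_ram_gb):
    # Rule of thumb from the comment above: roughly 1/2 of system RAM
    # per process, and roughly 3/4 of system RAM for the GPU overall.
    return total_ram_gb * 0.5, total_ram_gb * 0.75

for ram in (16, 32, 64, 96, 192):
    per_process, gpu_total = metal_budget(ram)
    print(f"{ram:>3} GB Mac -> ~{per_process:.0f} GB per process, ~{gpu_total:.0f} GB GPU overall")
```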
- Xmake: A modern C/C++ build tool
-
Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
-
Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
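To give a concrete flavour of the LoRA step, here is a minimal sketch using Hugging Face transformers and peft. It is not the tutorial's llama.cpp/KitOps pipeline, and the base model name is only a placeholder:

```python
# Minimal LoRA setup sketch (Hugging Face transformers + peft), for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling applied to the LoRA update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()         # only the small adapter matrices are trainable
```

From here, training proceeds with a standard dataset and training loop; only the adapter weights are updated, which is what keeps LoRA fine-tuning feasible on modest hardware.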
- GGML Flash Attention support merged into llama.cpp
-
Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
vgpu_unlock - Unlock vGPU functionality for consumer grade GPUs.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
nvlax - Future-proof NvENC & NvFBC patcher (Linux/Windows)
gpt4all - gpt4all: run open-source LLMs anywhere
Sunshine - Self-hosted game stream host for Moonlight.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
wlroots - A modular Wayland compositor library
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
unmanic - Unmanic - Library Optimiser
ggml - Tensor library for machine learning
Proxmox-Nvidia-LXC- - How to create a Proxmox LXC in 6.2-1
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM