vgpu-proxmox vs lollms-webui

| | vgpu-proxmox | lollms-webui |
|---|---|---|
| Mentions | 34 | 7 |
| Stars | - | 3,842 |
| Growth | - | 5.0% |
| Activity | - | 9.9 |
| Latest commit | - | 5 days ago |
| Language | - | Vue |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vgpu-proxmox
PAID GIG: PROXMOX Gpu Passthrough help
This is a good guide on setting up vGPU. https://gitlab.com/polloloco/vgpu-proxmox
VirGL
A better way [for me, anyways] is getting GRID drivers running. However, this only works with the 9xx cards up to the 2080.
https://gitlab.com/polloloco/vgpu-proxmox
- RTX 3060 Ti GDDR6X possible as vGPU?
Run vGPU on PVE 7.4.3 and split GPU out to Ubuntu VMs
We've followed PolloLoco's guide and confirmed we had vGPU functioning on the host and could pass vGPU with an A profile to a Windows 10 VM. Profile was overridden to the following (NOTE: nvidia-49 is a Q profile):
- nvidia-smi
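Profile overrides like the one quoted above are commonly done through the `profile_override.toml` file that vgpu_unlock-rs reads (the mechanism PolloLoco's guide describes). A hedged sketch; the key names follow that project's config format, and every value here is an illustrative assumption rather than the poster's actual settings:

```toml
# /etc/vgpu_unlock/profile_override.toml (illustrative values only)
[profile.nvidia-49]
num_displays = 1
display_width = 1920
display_height = 1080
max_pixels = 2073600      # display_width * display_height
cuda_enabled = 1
frl_enabled = 0           # disable the frame rate limiter
framebuffer = 0x74000000  # per-instance VRAM in bytes (assumption)
```

Changes take effect for newly started VMs; running guests keep the profile they were launched with.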
Looks like it's new server day!!
Look into Proxmox and vGPU; Craft has a couple of videos on it. If you have an Ampere or newer card (RTX 3000/4000 series), you're out of luck, at least for the "free" version. Also, don't use his videos as step-by-step guides: they're outdated, but good for a general "road map" of what it'll take to set up. I followed this GitLab guide to set up a test server with a 1660 Ti. In theory you should be able to run a Windows and a Linux VM off the same card. I say "in theory" because in the five minutes I tried to set it up it didn't work; I was in a hurry and just haven't gotten back to trying it.
- vGPU on PVE 7.4.3
Unable to Passthrough Nvidia T1000 to Linux Virtual Machine
Although not quite what you want, the T1000, as a Turing-based card, is one of the ones that can support the vGPU hack: https://gitlab.com/polloloco/vgpu-proxmox
No more hardware for the family... VDI is in.
For the GPU portion, you can unlock vGPU support in proxmox for free with this script
Any ideas to play around AI projects?
Most AI projects are going to need a GPU to work. If you really want to get into AI, I recommend moving to Proxmox and using a vGPU unlock script to split your GPUs between VMs instead of being limited to one passthrough GPU per VM.
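Splitting a GPU between VMs, as suggested above, boils down to creating mediated devices (mdevs) from the host GPU and attaching one to each VM. A minimal dry-run sketch of those steps on a host where the unlock is already working; it only prints the commands, and the PCI address, profile type, and VM ID are assumptions, not details from the post:

```shell
#!/bin/sh
# Dry-run sketch of carving a GPU into vGPU instances on a Proxmox host.
GPU="0000:01:00.0"   # host GPU PCI address (assumption)
TYPE="nvidia-49"     # vGPU profile type (assumption; list available ones first)
UUID=$(cat /proc/sys/kernel/random/uuid)

# 1. List the vGPU profiles the (unlocked) driver advertises for this GPU
echo "ls /sys/bus/pci/devices/$GPU/mdev_supported_types/"

# 2. Create one mediated device with the chosen profile
echo "echo $UUID > /sys/bus/pci/devices/$GPU/mdev_supported_types/$TYPE/create"

# 3. Attach it to a Proxmox VM (VM ID 100 is an assumption)
echo "qm set 100 -hostpci0 $GPU,mdev=$TYPE"
```

Remove the outer `echo`s to actually run the steps; Proxmox can also create the mdev automatically at VM start when `mdev=` is set on the `hostpciN` entry.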
lollms-webui
- Show HN: I made an app to use local AI as daily driver
What is the best text web UI currently?
Anyone know what has the most models supported and the fastest web UI? Or at least what everyone is using. I've seen https://github.com/oobabooga/text-generation-webui and https://github.com/ParisNeo/lollms-webui.
Stable Diffusion Backups, git/lfs alternatives, OpenAI's attack on Open Source
I used this: https://github.com/nomic-ai/gpt4all-ui
Any ideas to play around AI projects?
GPT4All is a self-hosted alternative to ChatGPT.
- Local Alternatives of ChatGPT and Midjourney
- A self-hostable ChatGPT like LLM based chatbot with web UI
What are some alternatives?
Easy-GPU-PV - A Project dedicated to making GPU Partitioning on Windows easier!
SillyTavern-Extras - Extensions API for SillyTavern.
vGPU_LicenseBypass - A simple script that works around Nvidia vGPU licensing with a scheduled task.
SillyTavern - LLM Frontend for Power Users.
stable-diffusion-webui - Stable Diffusion web UI
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
vgpu_unlock - Unlock vGPU functionality for consumer grade GPUs.
vgpu_unlock-rs - Unlock vGPU functionality for consumer grade GPUs
simple-proxy-for-tavern
fedora-acs-override - Using the ACS override patch for Fedora to split identical hardware in the kernel
pythia - The hub for EleutherAI's work on interpretability and learning dynamics