| | Proxmox | LocalAI |
|---|---|---|
| Mentions | 59 | 82 |
| Stars | 9,981 | 19,862 |
| Growth | - | 7.1% |
| Activity | 9.9 | 9.9 |
| Latest commit | 3 days ago | about 1 hour ago |
| Language | Shell | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Proxmox
-
LXC or Portainer for media server
There's no one right way to do this. For me, I focused on isolation and containment, so I used LXCs for everything, with Plex as the only privileged container. All the other *arrs remained unprivileged. My data resides on a NAS elsewhere on my network, so I had to set up SMB sharing to all the LXCs, and as you'll find out, that becomes less than intuitive. I wrote a guide here that details how I configured everything.
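The SMB-to-unprivileged-LXC setup described above can be sketched as: mount the share once on the Proxmox host, then bind-mount it into each container. This is only a sketch, not the linked guide's exact steps; the share path, mount point, and container ID 101 are placeholders:

```sh
# On the Proxmox host: mount the NAS share once. A credentials file keeps
# the password out of fstab; uid/gid 100000 map to the unprivileged range.
mount -t cifs //nas.local/media /mnt/media \
  -o credentials=/root/.smbcred,uid=100000,gid=100000

# Bind-mount the host path into unprivileged container 101.
pct set 101 -mp0 /mnt/media,mp=/mnt/media
```

Mounting on the host and bind-mounting avoids giving each container SMB credentials or privileged status.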
-
Moving Plex off Synology NAS. unRAID + Docker? Linux? TrueNAS?
I used the LXC script from tteck to set up a new Plex installation in an LXC: https://github.com/tteck/Proxmox. I installed nfs-common and updated /etc/fstab so I could mount all the media shares from my Synology.
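For reference, an /etc/fstab entry for an NFS share from a Synology might look like the following (the hostname and paths are placeholder assumptions; nfs-common must be installed first):

```
# /etc/fstab: mount a Synology media share over NFS
# _netdev delays the mount until the network is up
synology.local:/volume1/media  /mnt/media  nfs  defaults,_netdev  0  0
```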
-
Proxmox VE Helper-Scripts – Scripts for Streamlining Your Homelab with Proxmox
The actual scripts: https://github.com/tteck/Proxmox
After poking at a couple of these, they seem like they're 50% shiny packaging and 50% one-liner bash commands.
-
Beginner: Proxmox + Jellyfin + TrueNas
Then I tried installing Jellyfin using the scripts. That went better than my previous attempt; however, I still could not get the mounted storage to appear in the Jellyfin UI. I googled the issue and experimented with several solutions, like mounting the storage on the Proxmox server. When I added a new library (using the IP), Jellyfin started scanning. It seemed like Jellyfin was doing something because the CPU was running up to ~70%, so I left it overnight. When I checked in the morning, I still didn't see any videos in the Jellyfin UI. I checked the logs and saw that it threw an access-denied error for a /proc/ path, but nothing about why it couldn't display files on the mounted storage. My current setup only has a built-in GPU and 1 x 1TB SSD (I'm looking to add more storage later on), and the mounted storage only has 25 videos of ~200-500 MB each.
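When Jellyfin scans but shows nothing, a common culprit is that the service user Jellyfin runs as cannot read the mount. A quick check from inside the container (the `jellyfin` user name and `/mnt/media` path are assumptions about this setup):

```sh
# Run the listing as the service user; if this fails with "Permission
# denied", the fix is ownership/uid mapping on the mount, not Jellyfin.
sudo -u jellyfin ls -l /mnt/media
```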
-
Selfhosted VPN advice for Homelab Access
The HA VM took about five minutes, all created by using this script: bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/vm/haos-vm.sh)"
-
nginx proxy manager.....driving me insane
I had ACL configuration issues using NPM on Docker (Proxmox > LXC > Docker > NPM) when using IPv6. So now I'm running NPM in an unprivileged LXC set up with this script: https://github.com/tteck/Proxmox
-
Tailscale, NGINX proxy manager and Cloudflare for 2 or 3 subnets
My major issue: I tried to install Tailscale on NextcloudPi and it didn't work, even when I used this command line: bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/misc/add-tailscale-lxc.sh)" -s 106, but that didn't work with NextcloudPi either. Tailscale installed perfectly on NPM, though.
-
Tplink omada software
I set up on proxmox with the help of this. https://github.com/tteck/Proxmox
-
Docker on proxmox?
This github repo is a great place to start with LXCs https://github.com/tteck/Proxmox
-
Export Docker Containers from Unraid?
bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/scrypted.sh)"
LocalAI
- Drop-In Replacement for ChatGPT API
- Voxos.ai – An Open-Source Desktop Voice Assistant
- Ask HN: Set Up Local LLM
- FLaNK Stack Weekly 11 Dec 2023
- Is there any open source app to load a model and expose API like OpenAI?
-
What do you use to run your models?
If you're running this as a server, I would recommend LocalAI https://github.com/mudler/LocalAI
-
OpenAI Switch Kit: Swap OpenAI with any open-source model
LocalAI can do that: https://github.com/mudler/LocalAI
https://localai.io/features/openai-functions/
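As a sketch of what "drop-in replacement" means in practice: the request body is the same OpenAI chat-completions JSON, and only the endpoint changes. The model name "mistral" and port 8080 below are assumptions; use whatever model your LocalAI instance has loaded:

```shell
# The same JSON body the OpenAI API expects; only the URL differs.
BODY='{"model":"mistral","messages":[{"role":"user","content":"Hello"}]}'
echo "$BODY"

# Point the request at the local server instead of api.openai.com:
# curl http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" -d "$BODY"
```

Because the wire format matches, existing OpenAI client libraries work by overriding their base URL.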
-
"Romanian ChatGPT"
For inspiration: LocalAI, a replacement for OpenAI. It's already hot on GitHub.
-
Local LLM's to run on old iMac / Hardware
Your hardware should be fine for inferencing, as long as you don't bother trying to get the GPU working.
My $0.02 would be to try getting LocalAI running on your machine with OpenCL/CLBlast acceleration for your CPU. If you're running other things, you could limit the inferencing process to 2 or 3 threads. That should get it working; I've been able to run inference on even 13B models on cheap Rockchip SoCs. Your CPU should be fine, even if it's a little outdated.
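Capping threads as suggested above can be done in LocalAI's per-model YAML config. A minimal sketch, assuming a GGUF model file you have downloaded (the file name and model name are placeholders):

```yaml
# models/tinyllama.yaml -- LocalAI model definition (sketch)
name: tinyllama
parameters:
  model: tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf
threads: 2   # cap inferencing threads so other services stay responsive
```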
LocalAI: https://github.com/mudler/LocalAI
Some decent models to start with:
TinyLlama (extremely small/fast): https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v0.3-GGU...
Dolphin Mistral (larger size, better responses): https://huggingface.co/TheBloke/dolphin-2.1-mistral-7B-GGUF
-
Retrieval Augmented Generation in Go
Neither of these really requires OpenAI. You can do it with locally running models via something like https://github.com/mudler/LocalAI
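For the retrieval half of RAG, LocalAI also exposes an OpenAI-style embeddings endpoint, so the same swap applies there. A sketch, assuming a running LocalAI instance with an embeddings-capable model named "bert-embeddings" (both the port and model name are assumptions):

```sh
# Same shape as OpenAI's /v1/embeddings request, pointed at LocalAI.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model":"bert-embeddings","input":"A long document to index"}'
```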
What are some alternatives?
vaultwarden - Unofficial Bitwarden compatible server written in Rust, formerly known as bitwarden_rs
gpt4all - gpt4all: run open-source LLMs anywhere
cockpit-file-sharing - A Cockpit plugin to easily manage samba and NFS file sharing.
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
kubevirt - Kubernetes Virtualization API and runtime in order to define and manage virtual machines.
llama-cpp-python - Python bindings for llama.cpp
gravity-sync - 💫 The easy way to synchronize the DNS configuration of two Pi-hole 5.x instances.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
homer - A very simple static homepage for your server.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
DietPi - Lightweight justice for your single-board computer!
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.