| | wireguard-tools | llama.cpp |
|---|---|---|
| Mentions | 12 | 773 |
| Stars | 439 | 57,463 |
| Growth | 2.1% | - |
| Activity | 3.2 | 10.0 |
| Latest commit | 12 days ago | about 13 hours ago |
| Language | C | C++ |
| License | GNU General Public License v3.0 only | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
wireguard-tools
-
jc: Converts the output of popular command-line tools to JSON
Oh, this is cool. I'm a huge proponent of CLI tools supporting sensible JSON output, and things like https://github.com/WireGuard/wireguard-tools/blob/master/con... and PowerShell's |ConvertTo-Json are a huge part of my management/monitoring automation efforts.
But, unfortunately, "sensible" is doing some heavy lifting here, and reality is... well, reality. While the output of things like LSI/Broadcom StorCLI's 'suffix the command with J' approach and some of PowerShell's COM-hiding wrappers (which are depressingly common) is technically JSON, the end result is so mind-bogglingly complex-slash-useless that you're quickly forced to revert to 'OK, just run some regexes on the plain-text output' kludges anyway.
Having said that, I'll definitely check this out. If the first example given, parsing dig output, is indeed representative of what this can reliably do, it should be interesting...
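As a quick illustration of the pattern being discussed, jc's documented usage is to pipe a tool's plain-text output through a matching parser. A minimal sketch of the dig example mentioned above, assuming both `jc` and `jq` are installed:

```shell
# Parse dig's plain-text output into JSON with jc's --dig parser,
# then pull a single field out with jq.
dig example.com | jc --dig | jq -r '.[0].answer[0].data'
```

The payoff is that downstream automation can address fields by name instead of scraping columns with regexes.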
-
Write POSIX Shell
> Possible? Maybe. Easy? No. Especially the “testable” part.
a testable shell script? Never seen one.
Thinking about scripts I've read in the past, I remember seeing Jason Donenfeld's bash script for WireGuard and thinking how productive and readable it was:
https://github.com/WireGuard/wireguard-tools/blob/master/src...
- Accessing WireGuard via DDNS
- C# to C Struct
-
Identity Management for WireGuard
I see this when my equipment roams back into my private network and the WireGuard server is inside that LAN. It can be solved by NAT'ing packets arriving on your edge router's inside interface, destined for your outside IP, back to the inside WireGuard server IP.
Alternatively if your client is Linux, there is:
https://github.com/WireGuard/wireguard-tools/tree/master/con...
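A minimal sketch of the hairpin-NAT rules described above, assuming iptables on the edge router; the interface name `eth1`, public IP `203.0.113.1`, internal server `192.168.1.10`, and port `51820` are all placeholders for your own values:

```shell
# Rewrite LAN-originated packets that hit the router's public IP on the
# WireGuard port so they reach the internal server instead.
iptables -t nat -A PREROUTING -i eth1 -d 203.0.113.1 \
    -p udp --dport 51820 -j DNAT --to-destination 192.168.1.10

# Masquerade the hairpinned flow so replies return via the router,
# not directly to the client (which would break the connection).
iptables -t nat -A POSTROUTING -o eth1 -d 192.168.1.10 \
    -p udp --dport 51820 -j MASQUERADE
```

Without the second rule, the internal server would answer the LAN client directly and the client would drop the reply as coming from an unexpected address.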
-
wireguard-tools on FreeBSD (TrueNAS): where do I find the reresolve-dns.sh script? (Or something similar)
You have a copy here that you can edit: https://github.com/WireGuard/wireguard-tools/blob/master/contrib/reresolve-dns/reresolve-dns.sh
- Dynamic DNS setting??
- wireguard-dns
-
Route only certain dynamic IPs through the WireGuard tunnel
You could adapt this script for it. What this one does is re-resolve the domain of the endpoint for when it's a dynamic DNS name. You run it on a timer from cron, and when your dynamic DNS entry changes it will update the endpoint IP with wg set. You could adapt this script to update your AllowedIPs instead of the endpoint.
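The adaptation described above can be sketched roughly as follows. This is a hedged sketch, not the actual contrib script: the interface name `wg0`, the peer key placeholder, and the hostname are all hypothetical values you would substitute with your own.

```python
#!/usr/bin/env python3
# Sketch of a cron-driven updater: re-resolve a dynamic-DNS name and,
# when the address has changed, refresh a peer's AllowedIPs via `wg set`.
# INTERFACE, PEER_KEY, and HOSTNAME are placeholders, not real values.
import socket
import subprocess

INTERFACE = "wg0"
PEER_KEY = "PEER_PUBLIC_KEY_BASE64"   # hypothetical placeholder
HOSTNAME = "host.example-dyndns.org"  # hypothetical dynamic-DNS name


def resolve(hostname):
    """Return the current IPv4 address for hostname."""
    return socket.gethostbyname(hostname)


def update_allowed_ips(current_ip, last_ip):
    """Run `wg set` only when the resolved address has changed."""
    if current_ip == last_ip:
        return False  # nothing to do, avoid churning the peer config
    subprocess.run(
        ["wg", "set", INTERFACE, "peer", PEER_KEY,
         "allowed-ips", f"{current_ip}/32"],
        check=True,
    )
    return True
```

Run from cron every few minutes; since `wg set` is only invoked on an actual change, the steady-state case is a no-op.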
-
WireGuard macOS DMG File
I found the GitHub repository for wireguard-tools; however, I cannot find the exact commands required to connect to a particular VPN. I've created a .conf file and was wondering how to use it with wireguard-tools to establish a VPN tunnel to my network.
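For a question like this, the relevant wireguard-tools command is wg-quick, which brings a tunnel up from a config file. A short sketch, where the config name "myvpn" is a placeholder for your own file:

```shell
# wg-quick looks in /etc/wireguard/ by default, or accepts an explicit path.
sudo wg-quick up myvpn          # uses /etc/wireguard/myvpn.conf
sudo wg-quick up ./myvpn.conf   # or point at the file directly

sudo wg show                    # inspect peers and latest handshakes

sudo wg-quick down myvpn        # tear the tunnel down again
```

wg-quick reads the [Interface] and [Peer] sections of the .conf, creates the interface, assigns addresses, and installs routes for the AllowedIPs.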
llama.cpp
-
Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
-
Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
-
Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
-
Embeddings are a good starting point for the AI curious app developer
Have just done this recently for a local chat-with-PDF feature in https://recurse.chat. (It's a macOS app that has a built-in llama.cpp server and local vector database.)
Running an embedding server locally is pretty straightforward:
- Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
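Once you have a release binary and a GGUF model, a minimal sketch of running the server in embedding mode looks like the following. The binary name, flags, and endpoint shape vary between llama.cpp releases, and the model path is a placeholder, so treat this as an outline rather than exact commands:

```shell
# Start llama.cpp's HTTP server with embeddings enabled
# (binary name and model path are placeholders for your release/model).
./server -m ./model.gguf --embedding --port 8080 &

# Request an embedding vector for a piece of text:
curl -s http://localhost:8080/embedding \
    -H "Content-Type: application/json" \
    -d '{"content": "hello world"}'
```

The response contains the embedding vector, which you can then store in a local vector database for similarity search.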
- Mixtral 8x22B
- Llama.cpp: Improve CPU prompt eval speed
What are some alternatives?
wireguard-apple - Mirror only. Official repository is at https://git.zx2c4.com/wireguard-apple
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
HomeBrew - 🍺 The missing package manager for macOS (or Linux)
gpt4all - gpt4all: run open-source LLMs anywhere
CsWin32 - A source generator to add a user-defined set of Win32 P/Invoke methods and supporting types to a C# project.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
VxWireguard-Generator - Utility to generate VXLAN over Wireguard mesh SD-WAN configuration
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
textfsm - Python module for parsing semi-structured text into python tables.
ggml - Tensor library for machine learning
tailscale - The easiest, most secure way to use WireGuard and 2FA.
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM