| | stable-diffusion-webui-docker | llama.cpp |
|---|---|---|
| Mentions | 58 | 775 |
| Stars | 6,045 | 57,463 |
| Growth | - | - |
| Activity | 6.3 | 10.0 |
| Latest commit | 8 days ago | 2 days ago |
| Language | Shell | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-docker
- ‘Nudify’ Apps That Use AI to ‘Undress’ Women in Photos Are Soaring in Popularity
I also use the Stable Diffusion WebUI Docker; I found it really easy to set up.
- ComfyUI docker images
I'm currently using this docker setup: https://github.com/AbdBarho/stable-diffusion-webui-docker
- Can't figure out how to add new models to docker install of Automatic1111
I used the Docker install from here. It was easy to get Automatic1111's web interface up and running, but I'm trying to add new models and I can't figure out how to do it.
- A1111 model folders in WSL
Docker ftw
- What Stable Diffusion local install or online would you recommend/is your favorite?
- Infill on large images in Automatic1111 webui
I'm using A1111 via https://github.com/AbdBarho/stable-diffusion-webui-docker which was such a perfectly simple way to get set up, and I've been having an absolute blast. Turns out that for the most part I do not even need to leverage the 24GB vram on my 3090.
- Midjourney is getting ridiculous with the prompts they're banning. Agree/disagree?
- Synthetic data generation for model training · Issue #350 · CompVis/stable-diffusion
- What is the best alternative to midjourney?
- [Self Hosted] I'm looking for an up-to-date tutorial for installing Stable Diffusion on Proxmox. Is there such a thing out there?
llama.cpp
- Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There’s a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
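As a quick, hedged sketch of how to inspect that cap before changing anything (assuming the relevant sysctl key is iogpu.wired_limit_mb on recent macOS releases and debug.iogpu.wired_limit on older ones; the linked discussion has the authoritative details):

```python
import subprocess

# Hedged sketch: query the Metal wired-memory cap on macOS. The key is
# assumed to be iogpu.wired_limit_mb on recent releases and
# debug.iogpu.wired_limit on older ones; a value of 0 means the stock
# default cap described in the comment above.
for key in ("iogpu.wired_limit_mb", "debug.iogpu.wired_limit"):
    result = subprocess.run(["sysctl", "-n", key], capture_output=True, text=True)
    if result.returncode == 0:
        print(f"{key} = {result.stdout.strip()}")
        break
else:
    print("no iogpu wired-limit sysctl found on this machine")
```

Raising the value is the kind of tweak the linked llama.cpp discussion walks through, along with its caveats.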
- Xmake: A modern C/C++ build tool
- Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
- Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
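The tutorial itself goes through KitOps and the llama.cpp tooling; as a rough, hedged illustration of what using the result can look like, here is a sketch that loads a quantized base model together with a LoRA adapter via the llama-cpp-python bindings (the file paths are hypothetical and the bindings are an assumption, not something the tutorial prescribes):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical paths: a quantized GGUF base model plus the LoRA adapter
# produced by a fine-tuning run like the one described in the tutorial.
llm = Llama(
    model_path="models/base-model-q4_k_m.gguf",
    lora_path="models/my-finetune-lora.gguf",
    n_ctx=2048,
)

out = llm(
    "### Instruction:\nSummarize what LoRA does in one sentence.\n### Response:\n",
    max_tokens=64,
)
print(out["choices"][0]["text"].strip())
```

Keeping the adapter separate like this leaves the base model reusable across different fine-tunes.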
- GGML Flash Attention support merged into llama.cpp
- Phi-3 Weights Released
well https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
- Embeddings are a good starting point for the AI curious app developer
I've just done this recently for the local chat-with-PDF feature in https://recurse.chat (it's a macOS app with a built-in llama.cpp server and a local vector database).
Running an embedding server locally is pretty straightforward:
- Get llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
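To sketch the step after that (hedged: this assumes the server was started with embeddings enabled and that the native /embedding endpoint is available; flag names and response shapes have shifted a little between releases):

```python
import requests

# Hedged sketch: a llama.cpp server is assumed to be running on
# localhost:8080 with embeddings enabled. The native endpoint takes
# {"content": ...}; recent builds also expose an OpenAI-compatible
# /v1/embeddings endpoint that takes {"input": ...}.
resp = requests.post(
    "http://localhost:8080/embedding",
    json={"content": "Embeddings are a good starting point for the AI curious app developer."},
)
resp.raise_for_status()

data = resp.json()
# Older builds return {"embedding": [...]}; newer ones return a list of
# {"index": ..., "embedding": [...]} objects, so handle both shapes.
vector = data["embedding"] if isinstance(data, dict) else data[0]["embedding"]
print(f"dimension: {len(vector)}, first values: {vector[:5]}")
```

From there, storing the vectors in any local database and doing similarity lookups covers most of what a small app needs.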
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
lxc-gpu - Enjoy computation resource sharing at your laboratory with lxc-gpu!
gpt4all - gpt4all: run open-source LLMs anywhere
fast-stable-diffusion - fast-stable-diffusion + DreamBooth
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
GPTQ-for-LLaMa - 4-bit quantization of LLaMA using GPTQ
stable-diffusion-docker - Run the official Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint.
ggml - Tensor library for machine learning
rocm-gfx803
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM