stable-diffusion-webui-docker vs llama.cpp
| | stable-diffusion-webui-docker | llama.cpp |
|---|---|---|
| Mentions | 58 | 769 |
| Stars | 6,013 | 56,891 |
| Growth | - | - |
| Activity | 6.3 | 10.0 |
| Latest commit | 5 days ago | 1 day ago |
| Language | Shell | C++ |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
stable-diffusion-webui-docker
- ‘Nudify’ Apps That Use AI to ‘Undress’ Women in Photos Are Soaring in Popularity
  I also use the Stable Diffusion WebUI Docker; I found it really easy to set up.
- ComfyUI docker images
  I'm currently using this docker setup: https://github.com/AbdBarho/stable-diffusion-webui-docker (see the sketch below)
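For anyone evaluating that repo, here is a minimal sketch of its compose-profile workflow; the profile names (`download`, `comfy`, `auto`) are assumptions drawn from the repo's README at the time and may differ between versions:

```bash
# Fetch the repo and run the one-time download step, which populates the
# bind-mounted data/ directory with models shared by all of the UIs.
git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git
cd stable-diffusion-webui-docker
docker compose --profile download up --build

# Start ComfyUI (swap in "auto" for the Automatic1111 UI instead).
docker compose --profile comfy up --build
```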
- Can't figure out how to add new models to docker install of Automatic1111
  I used the Docker install from here. It was easy to get Automatic1111's web interface up and running, but I'm trying to add new models and I can't figure out how to do it.
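One answer, sketched under assumptions: that compose setup bind-mounts a local data/ directory into the containers, so adding a checkpoint is usually just a file copy. The `data/models/Stable-diffusion/` path reflects recent versions (older releases used `data/StableDiffusion/`), and the model URL is a placeholder:

```bash
cd stable-diffusion-webui-docker

# Drop the checkpoint into the bind-mounted models folder; the URL below
# is a placeholder, not a real model.
wget -P data/models/Stable-diffusion/ \
  https://example.com/some-model.safetensors

# Restart the UI container; the new checkpoint should then appear in the
# model dropdown (use its refresh button if it doesn't show immediately).
docker compose --profile auto up --build
```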
- A1111 model folders in WSL
  Docker FTW
- What Stable Diffusion local install or online would you recommend/is your favorite?
- Infill on large images in Automatic1111 webui
  I'm using A1111 via https://github.com/AbdBarho/stable-diffusion-webui-docker, which was such a perfectly simple way to get set up, and I've been having an absolute blast. It turns out that for the most part I don't even need to leverage the 24 GB of VRAM on my 3090.
- Midjourney is getting ridiculous with the prompts they're banning. Agree/disagree?
- Synthetic data generation for model training · Issue #350 · CompVis/stable-diffusion
- What is the best alternative to midjourney?
- [Self Hosted] I'm looking for an up-to-date tutorial for installing Stable Diffusion on Proxmox. Does such a thing exist?
llama.cpp
- Phi-3 Weights Released
  Well, https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
- Embeddings are a good starting point for the AI-curious app developer
  I just did this recently for the local chat-with-PDF feature in https://recurse.chat (a macOS app with a built-in llama.cpp server and a local vector database).
  Running an embedding server locally is pretty straightforward (see the sketch below):
  - Get a llama.cpp release binary: https://github.com/ggerganov/llama.cpp/releases
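The excerpt's remaining steps are truncated here, but a minimal end-to-end sketch looks roughly like this, assuming a 2024-era release (a binary named `server`, an `--embedding` flag, and an OpenAI-compatible `/v1/embeddings` route) and a placeholder model file:

```bash
# Start the bundled HTTP server with embeddings enabled.
./server -m models/nomic-embed-text-v1.5.Q8_0.gguf --embedding --port 8080

# Request an embedding via the OpenAI-compatible endpoint.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "Hello, world!"}'
```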
- Mixtral 8x22B
- Llama.cpp: Improve CPU prompt eval speed
- Ollama 0.1.32: WizardLM 2, Mixtral 8x22B, macOS CPU/GPU model split
  Ah, thanks for this! Unfortunately, I can no longer edit the parent comment that you replied to.
  As I said, I only compared the contributors graphs [0] and checked for overlaps, but those apparently only go back about a year and list at most 100 contributors, ranked by number of commits (the REST API can page past that cap; see the sketch below).
  [0]: https://github.com/ollama/ollama/graphs/contributors and https://github.com/ggerganov/llama.cpp/graphs/contributors
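As an aside, the 100-contributor cap applies to the graphs page; the GitHub REST API will page past it, so the overlap check can be scripted. A rough sketch (unauthenticated calls are rate-limited; add an auth token for real use):

```bash
# Pull contributor logins for both repos, two pages of 100 each, then
# intersect the sorted lists. Requires curl and jq.
for repo in ollama/ollama ggerganov/llama.cpp; do
  for page in 1 2; do
    curl -s "https://api.github.com/repos/$repo/contributors?per_page=100&page=$page"
  done | jq -r '.[].login' | sort -u > "$(basename $repo).txt"
done
comm -12 ollama.txt llama.cpp.txt   # logins that appear in both repos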
- KodiBot - Local Chatbot App for Desktop
  KodiBot is a desktop app that lets users run their own AI chat assistants locally and offline on Windows, macOS, and Linux. It is a standalone app, requiring no internet connection or additional dependencies, and it supports both llama.cpp-compatible models and the OpenAI API.
- Mixture-of-Depths: Dynamically allocating compute in transformers
  There are already some implementations out there that attempt to accomplish this!
  Here's an example: https://github.com/silphendio/sliced_llama
  A gist pertaining to said example: https://gist.github.com/silphendio/535cd9c1821aa1290aa10d587...
  Here's a discussion about integrating this capability with ExLlama: https://github.com/turboderp/exllamav2/pull/275
  And the same as above, but for llama.cpp: https://github.com/ggerganov/llama.cpp/issues/4718#issuecomm...
- The lifecycle of a code AI completion
  For those who might not be aware of it, there is also an open-source project on GitHub called "Twinny", an offline Visual Studio Code plugin equivalent to Copilot: https://github.com/rjmacarthy/twinny
  It can be used with a number of local model services. Currently, for my setup on an NVIDIA 4090, I'm running both the base and instruct models of deepseek-coder 6.7b, using Q5_K_M-quantized GGUF files (for performance) through the llama.cpp "server", where the base model serves completions and the instruct model serves chat interactions (a sketch of such a setup follows the links below).
  llama.cpp: https://github.com/ggerganov/llama.cpp/
  deepseek-coder 6.7b base GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-base-GGU...
  deepseek-coder 6.7b instruct GGUF files: https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct...
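A setup like that can be approximated with two independent llama.cpp server instances, one per model; a rough sketch, where the ports, context size, GPU-offload count, and model paths are all illustrative:

```bash
# Base model on :8080 for fill-in-the-middle completions.
./server -m models/deepseek-coder-6.7b-base.Q5_K_M.gguf \
  --port 8080 -c 4096 -ngl 99 &

# Instruct model on :8081 for chat.
./server -m models/deepseek-coder-6.7b-instruct.Q5_K_M.gguf \
  --port 8081 -c 4096 -ngl 99 &

# The editor plugin is then pointed at :8080 for completions and :8081
# for chat.
```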
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
lxc-gpu - Enjoy sharing computation resources at your laboratory with lxc-gpu!
gpt4all - gpt4all: run open-source LLMs anywhere
fast-stable-diffusion - fast-stable-diffusion + DreamBooth
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
diffusionbee-stable-diffusion-ui - Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
stable-diffusion-docker - Run the official Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint.
ggml - Tensor library for machine learning
rocm-gfx803
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM