| | sd-webui-lobe-theme | llama.cpp |
|---|---|---|
| Mentions | 77 | 778 |
| Stars | 2,198 | 57,984 |
| Growth | 6.5% | - |
| Activity | 9.3 | 10.0 |
| Latest commit | 4 days ago | 3 days ago |
| Language | TypeScript | C++ |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
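As a toy illustration of such a recency-weighted score (the site's actual formula is not published here, and the half-life parameter below is an assumption of mine), each commit can be weighted by an exponential decay of its age:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Toy recency-weighted activity score: a commit from today counts
    as 1.0, and every half_life_days of age halves a commit's weight.
    half_life_days is an illustrative assumption, not the site's value."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# A project with many recent commits outscores one with only old commits:
recent = activity_score([0, 1, 2, 3, 5])
stale = activity_score([90, 120, 150, 180, 365])
```

Under this sketch, raw scores would then be mapped to a percentile rank across all tracked projects to get numbers like 9.3 or 10.0.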
sd-webui-lobe-theme
-
Upscayl – Free and Open Source AI Image Upscaler
Upscayl is very approachable, but it lacked many features I needed. I ended up using https://github.com/AUTOMATIC1111/stable-diffusion-webui after upscaling became part of my regular workflow, but for someone who just needs a few images enhanced, it's an ideal tool.
-
The Basics of AI Image Generation: How to create your own AI-generated image using Stable Diffusion on your local machine.
For the Git alternative, simply right-click on the location where you want to put Stable Diffusion, select "Git Bash Here", then paste this into the CLI: git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
-
Stable Cascade
ComfyUI is similar to Houdini in complexity, but immensely powerful. It's a joy to use.
There are also a large number of resources available for it on YouTube, GitHub (https://github.com/comfyanonymous/ComfyUI_examples), Reddit (https://old.reddit.com/r/comfyui), CivitAI, Comfy Workflows (https://comfyworkflows.com/), and OpenArt Flow (https://openart.ai/workflows/).
I still use AUTO1111 (https://github.com/AUTOMATIC1111/stable-diffusion-webui) and the recently released and heavily modified fork of AUTO1111 called Forge (https://github.com/lllyasviel/stable-diffusion-webui-forge).
-
Show HN: I made a local wrapper for Automatic 1111
Seems like an interesting project. Regarding the name, is there permission to use something so similar to AUTOMATIC1111 [1]?
> Diffusers will Cuda out of memory/perform very slowly for huge generations, like 2048x2048 images, while Auto 1111 SDK won't.
Do we have some numbers on this? I have seen AUTOMATIC1111 fall over while using only half the available GPU VRAM - there seems to be some weirdness where it tries to allocate the next batch before de-allocating the last one.
> You can use any of the 6 compatible RealESRGAN models/weights with our RealESRGAN pipeline for upscaling images. Here are the model ids:
I've previously had trouble trying to use AUTOMATIC1111 upscalers; it seems to need more GPU VRAM than just generating the image at the upscaled resolution in the first place.
[1] https://github.com/AUTOMATIC1111/stable-diffusion-webui
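For intuition on why very large generations run out of VRAM: activation memory grows roughly with pixel count, so a 2048x2048 image is about 16x heavier than a 512x512 one. A back-of-the-envelope sketch (the linear-scaling-with-pixels assumption is mine; real pipelines vary by architecture and attention implementation):

```python
def relative_memory(w, h, base_w=512, base_h=512):
    """Rough pixel-count ratio versus a 512x512 baseline; assumes
    activation memory scales linearly with pixel count (an approximation)."""
    return (w * h) / (base_w * base_h)

# 2048x2048 has 16x the pixels of 512x512, so an upscaling pass or a
# direct high-resolution generation needs far more headroom than the
# base generation did.
ratio = relative_memory(2048, 2048)
```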
-
Stable Code 3B: Coding on the Edge
You might be thinking of Fooocus: https://github.com/lllyasviel/Fooocus
The Stable Diffusion web interface that got a lot of people's attention originally was Automatic1111: https://github.com/AUTOMATIC1111/stable-diffusion-webui
Fooocus is definitely more beginner-friendly; it does a lot of the prompt engineering for you. Automatic1111 has a ton of plugins, most notably ControlNet, which gives you fine-grained control over the images, but there is a learning curve.
- Google Imagen 2
-
Free or "practically free" AI picture generator?
Stable Diffusion https://github.com/AUTOMATIC1111/stable-diffusion-webui
-
Things to do, to put my old PC to use?
Make it into a stable diffusion server!
-
GTA 6 trailer screencaps, photorealistic style
There's no link version, you have to run it locally. You install it from here
-
Automatic1111 v1.7.0-RC published
Repository: AUTOMATIC1111/stable-diffusion-webui · Tag: v1.7.0-RC · Commit: 48fae7c · Released by: AUTOMATIC1111
llama.cpp
-
IBM Granite: A Family of Open Foundation Models for Code Intelligence
If you can compile stuff, then looking at llama.cpp (what Ollama uses) is also interesting: https://github.com/ggerganov/llama.cpp
The server is here: https://github.com/ggerganov/llama.cpp/tree/master/examples/...
And you can search for any GGUF on Hugging Face.
-
Ask HN: Affordable hardware for running local large language models?
Yes, Metal seems to allow a maximum of 1/2 of the RAM for one process, and 3/4 of the RAM allocated to the GPU overall. There's a kernel hack to fix it, but that comes with the usual system integrity caveats. https://github.com/ggerganov/llama.cpp/discussions/2182
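The fractions above give a quick sanity check for how large a model a Mac can realistically hold. A minimal sketch, assuming the 1/2-per-process and 3/4-overall rules quoted in the comment (the actual limits depend on the macOS version and the linked workaround):

```python
def metal_limits_gb(system_ram_gb):
    """Apply the rule-of-thumb limits from the linked discussion:
    a single process may map up to half of system RAM, and the GPU
    overall may use up to three quarters of it."""
    return {
        "per_process_gb": system_ram_gb / 2,
        "gpu_total_gb": system_ram_gb * 3 / 4,
    }

# On a 64 GB machine, a single llama.cpp process could map about 32 GB,
# which bounds the size of GGUF model (plus KV cache) it can run on Metal.
limits = metal_limits_gb(64)
```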
- Xmake: A modern C/C++ build tool
-
Better and Faster Large Language Models via Multi-Token Prediction
For anyone interested in exploring this, llama.cpp has an example implementation here:
https://github.com/ggerganov/llama.cpp/tree/master/examples/...
- Llama.cpp Bfloat16 Support
-
Fine-tune your first large language model (LLM) with LoRA, llama.cpp, and KitOps in 5 easy steps
Getting started with LLMs can be intimidating. In this tutorial we will show you how to fine-tune a large language model using LoRA, facilitated by tools like llama.cpp and KitOps.
- GGML Flash Attention support merged into llama.cpp
-
Phi-3 Weights Released
Well, https://github.com/ggerganov/llama.cpp/issues/6849
- Lossless Acceleration of LLM via Adaptive N-Gram Parallel Decoding
- Llama.cpp Working on Support for Llama3
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
ollama - Get up and running with Llama 3, Mistral, Gemma, and other large language models.
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
gpt4all - gpt4all: run open-source LLMs anywhere
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
stable-diffusion-webui-directml - Stable Diffusion web UI
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
stable-diffusion-webui-ux - Stable Diffusion web UI UX
ggml - Tensor library for machine learning
stable-diffusion-webui-colab - stable diffusion webui colab
alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM