seed_travel vs clip-interrogator-ext

| | seed_travel | clip-interrogator-ext |
|---|---|---|
| Mentions | 16 | 10 |
| Stars | 302 | 464 |
| Growth | - | - |
| Activity | 6.3 | 4.7 |
| Last commit | 11 months ago | 3 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
seed_travel
-
a short seed travel
Seed travel is a technique, and a script for A1111: https://github.com/yownas/seed_travel
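The idea behind seed travel can be sketched in outline: generate the initial latent noise for two seeds, spherically interpolate between the two noise tensors, and render every intermediate tensor with the same prompt. A minimal numpy sketch of that interpolation step (the function names are mine, not the extension's; a real pipeline would feed each frame's noise to the diffusion sampler):

```python
import numpy as np

def slerp(t, a, b):
    """Spherical linear interpolation between two flattened noise tensors."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return a  # vectors nearly parallel: interpolation is trivial
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def seed_noise(seed, size):
    """Deterministic initial latent noise for a given seed."""
    return np.random.default_rng(seed).standard_normal(size)

# Walk from seed 1234 to seed 5678 in 10 frames; rendering each frame
# with an identical prompt is what gives the smooth, flicker-free animation.
noise_a = seed_noise(1234, 16)
noise_b = seed_noise(5678, 16)
frames = [slerp(t, noise_a, noise_b) for t in np.linspace(0.0, 1.0, 10)]
```

At t=0 and t=1 the slerp returns the endpoint noise exactly, so the first and last frames match the plain renders of each seed.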
-
Transmigrations concert visuals remixes
For the video it turned out a bit too "hairy" compared to many of the still images (I believe because of the long landscape aspect ratio), but I ran out of time to fiddle. I used the Seed Travel extension for the animation and ChaiNNer with the 4x-Valar upscaler.
-
Most useful extensions for beginners, except ControlNet
Seed Travel and Clip Interrogator extensions are both listed in the extensions tab of a1111, so that's the easiest route. But sure: https://github.com/yownas/seed_travel and https://github.com/pharmapsychotic/clip-interrogator-ext
-
What is the theoretical max number of images that stable diffusion can generate?
smooth latent space https://github.com/yownas/seed_travel
- Trying out some Stable Diffusion seed travel stuff
-
How to achieve this barely visible transition?
To stick with one prompt and slowly move to another seed, use this script instead https://github.com/yownas/seed_travel
-
Use the seed_travel extension for automatic1111 to make some excellent "flickerless" animations
Get the seed_travel extension by yownas. Follow the instructions to install it via the webui.
-
Chika - Seed Travel extension
I've added a new feature to https://github.com/yownas/seed_travel where you can select different "Interpolation rates". This one uses "Slow start"
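An "interpolation rate" is just an easing curve that remaps the evenly spaced per-frame positions before the seed interpolation happens. The extension's actual curves may differ, but "Slow start" behaves roughly like a polynomial ease-in; a small illustrative sketch:

```python
def slow_start(t, power=3):
    """Ease-in curve: tiny steps near t=0, accelerating toward t=1."""
    return t ** power

# Remap evenly spaced frame positions through the rate curve before
# handing them to the seed interpolation.
steps = [i / 9 for i in range(10)]
eased = [slow_start(t) for t in steps]
```

Early frames bunch up near the starting seed, so the animation lingers there before accelerating toward the destination seed.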
-
Best Option for Large Digital Wall Display?
Compressing the videos has become quite a project that involves the seed_travel script, a little imagemagick, upscaling with realSR, an absolute ton of interpolation with RIFE, and the swiss army knife of video tools, ffmpeg.
- Interpolation with openai/guided-diffusion
clip-interrogator-ext
-
Is there any way I can generate tons of images and rate them so the model adjusts to my taste?
Instead, what I'd recommend is a manual loop to home in on what prompts work well. You'll need two extensions, assuming you're using a1111: AestheticScorer and CLIP Interrogator. The aesthetic scorer will rate generated images and attach a score from 1 to 10 to the image metadata. There are a few image viewers that can view and sort by aesthetic score; Breadboard and Diffusion Toolkit are popular options, I believe. You can filter by the score to throw out lower-quality generations, letting you focus on the better ones for taste matching. After that you can sort through the remaining images and see which ones fit your tastes best.
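The filtering step described above amounts to a small helper: once the scorer has attached a score to each generation's metadata, keep only those above a threshold and sort them best-first. Purely illustrative; the actual metadata key the scorer extension writes may differ:

```python
def filter_by_score(images, threshold=6.0):
    """Keep generations whose aesthetic score meets the threshold, best first.

    `images` maps filename -> metadata dict; the score is assumed to be
    stored under a "score" key (hypothetical; check your scorer's output).
    """
    kept = [name for name, meta in images.items()
            if float(meta.get("score", 0.0)) >= threshold]
    return sorted(kept, key=lambda name: -float(images[name]["score"]))

generations = {
    "a.png": {"score": "7.2"},
    "b.png": {"score": "4.1"},
    "c.png": {"score": "6.5"},
}
keepers = filter_by_score(generations)  # ["a.png", "c.png"]
```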
-
Does stable diffusion (or some other open source tool) have the equivalent of midjourneys /describe feature?
I think CLIP Interrogator may be what you're looking for.
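Under the hood, a CLIP interrogator scores a bank of candidate phrases against the image's CLIP embedding and keeps the best matches. A toy sketch of that ranking step, using made-up 3-dimensional embeddings (real ones come from a CLIP model and have hundreds of dimensions):

```python
import numpy as np

def best_caption(image_emb, candidates):
    """Return the candidate phrase whose embedding has the highest
    cosine similarity to the image embedding."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidates, key=lambda phrase: cos(image_emb, candidates[phrase]))

# Stand-in embeddings; a real interrogator encodes the image and each
# phrase with a CLIP model before comparing them.
image_emb = np.array([1.0, 0.0, 0.2])
candidates = {
    "a watercolor landscape": np.array([0.9, 0.1, 0.3]),
    "a photo of a cat": np.array([0.0, 1.0, 0.0]),
}
caption = best_caption(image_emb, candidates)  # "a watercolor landscape"
```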
- how do I get these style images?
-
What are some of the best and the easiest to install modules for this? I got web ui. And I got some face restoration thing. (Apologies, I'm not very smart.)
clip-interrogator-ext: to get a potential prompt from an image (I don't know if it works, as I use my own fork because I need a newer version of transformers)
-
Stable Diffusion to identify and tag objects in images ?
yeah this was my first thought too, https://github.com/pharmapsychotic/clip-interrogator-ext
-
King of the Fae
I basically started taking all of my best AI-generated images, as well as any images I see online that I really like, and using https://github.com/pharmapsychotic/clip-interrogator-ext.git (Pharmapsychotic's clip-interrogator-ext) to capture that aesthetic in my work. This long-term process requires you to interrogate images you like often, then drop both positive and negative prompts from those images into your prompt. You will start out with a smaller prompt, but it will grow over time if you keep adding tokens from your best images. In particular, your negative prompt will start to look insane, but it's important to trust it; just be careful to avoid repeating the same tokens and using any color-specific language.

I have used this method over the course of a couple of weeks to grow an existing prompt that had produced great images into a behemoth prompt that has been minting extremely creative images for me in various angles, colors, and compositions. Another important note: if you are constantly interrogating and adding prompt tokens from similar imagery, such as women, it will heavily bias your output towards women even if there are none in the positive prompt. Theoretically, I feel what this is doing is narrowing the model down to the very specific aesthetic you are going for, and therefore producing more provoking, top-quality images, especially in highly tuned models.
-
Most useful extensions for beginners, except ControlNet
Seed Travel and Clip Interrogator extensions are both listed in the extensions tab of a1111, so that's the easiest route. But sure: https://github.com/yownas/seed_travel and https://github.com/pharmapsychotic/clip-interrogator-ext
- Embedded Training my Face - Workflow Question
-
Just discovered a useful trick for getting good negative words.
Another way to do this is using the Clip Interrogator extension. This does a better job of analyzing the image and also does negatives. https://github.com/pharmapsychotic/clip-interrogator-ext.git
- img2txt, but with identifiable prompts?
What are some alternatives?
rife-ncnn-vulkan - RIFE, Real-Time Intermediate Flow Estimation for Video Frame Interpolation implemented with ncnn library
stable-diffusion-webui-wildcards - Wildcards
stable-diffusion-backend - Backend for my Stable diffusion project(s)
sd-webui-supermerger - model merge extension for stable diffusion web ui
pi_video_looper - Application to turn your Raspberry Pi into a dedicated looping video playback device, good for art installations, information displays, or just playing cat videos all day.
sd-webui-additional-networks
realsr-ncnn-vulkan - RealSR super resolution implemented with ncnn library
stable-diffusion-webui-images-browser - an image browser for stable-diffusion-webui
batchlinks-webui - Download several Huggingface, MEGA, and CivitAI links at once. SD webui extension. For colab.
sd-extension-aesthetic-scorer - Aesthetic Scorer extension for SD WebUI
Stable-Diffusion-Webui-Civitai-Helper - Stable Diffusion Webui Extension for Civitai, to manage your model much more easily.