make-a-video-pytorch vs stable-diffusion

| | make-a-video-pytorch | stable-diffusion |
|---|---|---|
| Mentions | 6 | 17 |
| Stars | 1,843 | 1,403 |
| Growth | - | - |
| Activity | 3.4 | 2.9 |
| Latest commit | 8 days ago | 4 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
make-a-video-pytorch
- How do I get this Python machine learning source code file to run?
- Imagic (Google's Text-Based Image Editing) implemented in Stable Diffusion
- An AI that generates videos from text! | Make-A-Video Explained
► Pytorch implementation: https://github.com/lucidrains/make-a-video-pytorch
- New text2video and img2video model from Meta - someone implement this with SD please
- Lucidrains / Make-a-Video-PyTorch
- Make-A-Video is a state-of-the-art AI system that generates videos from text
Amazing. And lucidrains is on the case as well: https://github.com/lucidrains/make-a-video-pytorch
stable-diffusion
- Is it possible to merge VAEs?
Download this training project: git clone https://github.com/justinpinkney/stable-diffusion.git
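Merging VAEs (or full checkpoints) generally comes down to a per-tensor weighted average of two state dicts at a chosen ratio — the same operation the merge-models project listed under alternatives performs. A minimal PyTorch sketch, using tiny `Linear` modules as stand-ins for real VAE weights:

```python
import torch
import torch.nn as nn

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate two state dicts: (1 - alpha) * A + alpha * B."""
    merged = {}
    for key in sd_a:
        merged[key] = torch.lerp(sd_a[key].float(), sd_b[key].float(), alpha)
    return merged

# Illustrative stand-ins for two checkpoints with identical architecture.
model_a, model_b = nn.Linear(4, 4), nn.Linear(4, 4)
merged_sd = merge_state_dicts(model_a.state_dict(), model_b.state_dict(), alpha=0.3)

# The merged weights load back into a model of the same architecture.
target = nn.Linear(4, 4)
target.load_state_dict(merged_sd)
```

This only makes sense when both checkpoints share the same architecture and key names; `alpha=0` reproduces model A's weights, `alpha=1` model B's.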
- How can I install this Image Mixer onto automatic1111's webui?
Looks like it's using https://github.com/justinpinkney/stable-diffusion/blob/4ac995b6f663b74dfe65400285e193d4167d259c/scripts/gradio_image_mixer.py to do the bulk of the work, meaning the core functionality is built into stable-diffusion; it seems the UI just isn't built to support it. Their ckpt is here too: https://huggingface.co/lambdalabs/image-mixer/tree/main.
- Image Mixer CUDA Out of Memory
Any idea how to make Image Mixer work in this build? On an RTX 3060 with 12 GB of memory I get the message:
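For out-of-memory errors like this, the usual first steps are running the pipeline in half precision (and, in diffusers-based builds, enabling attention slicing). Half precision alone cuts tensor memory in two, as a quick check illustrates (the shape here is just an example):

```python
import torch

# A batch of SD-style latents at 512x512 resolution (illustrative shape).
x = torch.randn(1, 4, 64, 64)  # float32 by default

fp32_bytes = x.element_size() * x.nelement()         # 4 bytes per element
fp16_bytes = x.half().element_size() * x.nelement()  # 2 bytes per element

print(fp32_bytes, fp16_bytes)  # fp16 halves the footprint
```

The same factor of two applies to model weights and activations, which is often the difference between fitting in 12 GB or not.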
- Ideas for new features for AI generation techniques
- AI Image Editing from Text! Imagic Explained
References: ►Read the full article: https://www.louisbouchard.ai/imagic/ ►Kawar, B., Zada, S., Lang, O., Tov, O., Chang, H., Dekel, T., Mosseri, I. and Irani, M., 2022. Imagic: Text-Based Real Image Editing with Diffusion Models. arXiv preprint arXiv:2210.09276. ► Use it with Stable Diffusion: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb ►My Newsletter (A new AI application explained weekly to your emails!): https://www.louisbouchard.ai/newsletter/
- Imagic (Google's Text-Based Image Editing) implemented in Stable Diffusion
The notebook: https://github.com/justinpinkney/stable-diffusion/blob/main/notebooks/imagic.ipynb
- [D] DreamBooth Stable Diffusion training in 10 GB VRAM, using xformers, 8bit adam, gradient checkpointing and caching latents.
There's a script for the SD --> Diffusers here: https://github.com/justinpinkney/stable-diffusion/blob/main/scripts/convert_sd_to_diffusers.py
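Of the memory tricks in that post, caching latents is the easiest to sketch: run every training image through the frozen VAE encoder once up front and train from the stored latents, so the encoder never has to run (or hold activations) during the training loop. A toy illustration with a hypothetical stand-in encoder (the real pipeline uses the SD VAE):

```python
import torch
import torch.nn as nn

# Stand-in for the frozen VAE encoder (real SD maps 3x512x512 -> 4x64x64).
encoder = nn.Conv2d(3, 4, kernel_size=8, stride=8)
encoder.requires_grad_(False)

images = [torch.randn(3, 64, 64) for _ in range(4)]  # dummy dataset

# One-time pass: cache the latents so the encoder never runs during training.
with torch.no_grad():
    cached_latents = [encoder(img.unsqueeze(0)) for img in images]

# The training loop now reads from the cache (no encoder forward pass).
for latent in cached_latents:
    pass  # ...noise/denoise/UNet step would go here
```

The trade-off is extra RAM or disk for the cache in exchange for VRAM and per-epoch compute — which is exactly what the 10 GB figure relies on.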
- [P] How to fine tune stable diffusion: how we made the text-to-pokemon model at Lambda
You can start with the github which contains the code: https://github.com/justinpinkney/stable-diffusion
- Pokemon Stable Diffusion: A fine-tuned model of Stable Diffusion to only create Pokemon
Hmmm, I just double-checked the hashes of my local file, what's on Hugging Face, and what you showed above, and they all match. I'm not familiar with that repo, so maybe something weird is going on. I tested it using the original txt2img script in the stable-diffusion repo:
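Double-checking a local checkpoint against a published file is just a matter of hashing it; a small sketch (the filename is hypothetical):

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so multi-GB checkpoints need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# print(file_sha256("pokemon-sd.ckpt"))  # hypothetical filename
```

Compare the digest against the one shown on the Hugging Face file page; a mismatch means a corrupted or different download.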
- List of Stable Diffusion systems - Part 2
*PICK* (Added Sep. 12, 2022) Web app Stable Diffusion Image Variations by lambdalabs. GitHub repo. Generates variations of an input image without use of a text prompt. Censored.
What are some alternatives?
NeROIC
cog-stable-diffusion - Diffusers Stable Diffusion as a Cog model
video-diffusion-pytorch - Implementation of Video Diffusion Models, Jonathan Ho's new paper extending DDPMs to Video Generation - in Pytorch
material_stable_diffusion - Tileable Stable Diffusion - Cog model
Clip-Forge
stability-sdk - SDK for interacting with stability.ai APIs (e.g. stable diffusion inference)
text2mesh - 3D mesh stylization driven by a text input in PyTorch
merge-models - Merges two latent diffusion models at a user-defined ratio
DALLE2-video - Direct application of DALLE-2 to video synthesis, using factored space-time Unet and Transformers
CrossAttentionControl - Unofficial implementation of "Prompt-to-Prompt Image Editing with Cross Attention Control" with Stable Diffusion
ez-text2video - Easily run text-to-video diffusion with customized video length, fps, and dimensions on 4GB video cards or on CPU.
authcompanion2 - An admin-friendly, User Management Server (with Passkeys & JWTs) - for seamless and secure integration of user authentication