CogVideo vs stable-diffusion

| | CogVideo | stable-diffusion |
|---|---|---|
| Mentions | 39 | 40 |
| Stars | 3,512 | 594 |
| Growth | 1.6% | - |
| Activity | 2.4 | 0.0 |
| Latest commit | 11 months ago | over 1 year ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
CogVideo
- InstructPix2Pix Video: "Turn the wave into trash"
Additionally, two open-source demos, [CogVideo](https://github.com/THUDM/CogVideo) by a group of CS students and a model by [Antonia Antonova](https://antonia.space/text-to-video-generation), have presented their own innovative methods of generating video from text.
- Effortpost: The Future Of Media Synthesis and AI Art
The second thing that will happen is the appearance of AI video and audio. Google has shown two programs for video generation, one which is fairly high quality and the other which can make long videos with several scenes. Meta has also demonstrated their own. We've already seen other projects like CogVideo, as well as many others that are currently being worked on. It's likely that these techniques will become so refined that over the next year or two, they'll have a boom similar to image generation programs. And eventually, they'll have a similar application in video editing, once coherence is adequate. Select a person's shirt, and it stays that way for the remainder of the scene. Change an actor's hairstyle in real time, or add characters that didn't exist into a scene and let the computer figure out the desired level of realism. This'll revolutionize VFX to a degree where making an effects-heavy film will be less about wrangling complex toolsets and more about making aesthetic choices of style and placement.
- AI Content Generation, Part 1: Machine Learning Basics
- Can we please make a general update on all the "most important" news/repos available?
- Stable Diffusion Public Release – Stability.ai
Check out https://github.com/THUDM/CogVideo - progress is being made on coherent video generation.
Characters and dialogue are effectively solved, just look at GPT-3.
The entity behind StableDiffusion is also supporting generative music art, so let's see what is coming out of that: https://www.harmonai.org/
We are currently far away from generating a production quality movie with AI, but I don't think it's going to be nearly as long as a lifetime. In my opinion, we'll have high quality AI shorts within the decade.
- How far away are we from having AI like DALL-E 2 be able to create other media like 3D models or video?
CogVideo and a CogView web app.
- Does training transformers on large corpuses of music files have some hidden difficulty which makes it impossible?
A better comparison to AI music generation would be video generation, which has not improved much since I saw the first examples some years ago. The latest iteration is stuff like CogVideo, which is only able to generate 4-second videos with moderate to strong artifacts.
- [R] CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers + Gradio Web Demo
github: https://github.com/THUDM/CogVideo
- CogVideo: Code and 9.4B Model for Text-to-Video Generation via Transformers
- CogVideo (text-to-video) model, code, and demo are available
GitHub repo.
stable-diffusion
- Stable Diffusion links from around September 12, 2022 that I collected for further processing
- Stable Diffusion links from around September 16, 2022 that I collected for further processing
- Can't install neonsecret's fork
1. git clone https://github.com/neonsecret/stable-diffusion
2. pip install --upgrade -r requirements.txt
3. conda env create -f environment.yaml
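As posted, the steps run pip before the conda environment exists. A minimal sketch of the more usual order for CompVis-style forks, assuming this fork keeps the upstream layout and that its environment.yaml names the environment "ldm" (both assumptions):

```bash
# Sketch only; assumes the fork follows the upstream CompVis layout
# and that environment.yaml names the conda env "ldm" (an assumption).
git clone https://github.com/neonsecret/stable-diffusion
cd stable-diffusion

# Create and activate the environment defined by the repo
conda env create -f environment.yaml
conda activate ldm

# Top up anything not already pinned by the environment file
pip install --upgrade -r requirements.txt
```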
- AI Art: Dantooine Jedi Enclave, Unimaginably cool I can make fanart for any game
- Please recommend a way to run SD on 4GB Nvidia on Ubuntu
neonsecret's fork is the only one I can get to run on my 4GB GeForce GTX 1050 Ti. I also use OptimizedSD: just the optimizedSD scripts folder copied over into neonsecret's fork. I've never been able to get AUTOMATIC1111's fork to work for me.
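A rough sketch of that copy-over setup, assuming the optimizedSD folder comes from basujindal's fork and that its optimized_txt2img.py script and flags (--prompt, --H, --W, --n_samples) are used unchanged; the local paths and prompt below are placeholders:

```bash
# Sketch of the "copy the optimizedSD folder over" workflow described above.
# Assumptions: optimizedSD is taken from basujindal/stable-diffusion and its
# optimized_txt2img.py keeps its usual flags; repo paths are placeholders.
git clone https://github.com/basujindal/stable-diffusion basujindal-sd
cp -r basujindal-sd/optimizedSD stable-diffusion/   # neonsecret's checkout

cd stable-diffusion
python optimizedSD/optimized_txt2img.py \
    --prompt "a photograph of an astronaut riding a horse" \
    --H 512 --W 512 --n_samples 1
```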
- Everything has worked flawlessly so far except this command. Any idea as to what the issue might be?
You can also clone neonsecret's version of the optimized repository if you want a better GUI, or use Arki's guide for AUTOMATIC1111's repo, which also has an optimized mode and is pretty feature-packed.
- Why can't I use Stable Diffusion?
sd gui
- The first 4k picture ever produced by neural networks
Hey guys, today I produced the first ever 4k image using this: https://github.com/neonsecret/stable-diffusion/
- Best GUI overall?
https://github.com/neonsecret/stable-diffusion/
https://github.com/neonsecret/neonpeacasso
I have these two, for both low-end and high-end GPUs.
- Literally 4k (3840x2176)
using https://github.com/neonsecret/stable-diffusion
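For context, a 3840x2176 render with a CompVis-style interface comes down to passing the target height and width; the flags below assume the upstream scripts/txt2img.py interface is unchanged in this fork, and whether the result actually fits in VRAM depends on the fork's optimizations:

```bash
# Hedged sketch: 3840x2176 ("literally 4k") generation.
# Assumes neonsecret's fork keeps the upstream scripts/txt2img.py flags
# and that its VRAM optimizations make this resolution fit on the GPU.
python scripts/txt2img.py \
    --prompt "a sweeping mountain landscape at sunset" \
    --H 2176 --W 3840 \
    --n_samples 1 --ddim_steps 50
```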
What are some alternatives?
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
dalle-playground - A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
stable-diffusion-rocm
stable-diffusion-webui - Stable Diffusion web UI
stable-diffusion-webui-feature-showcase - Feature showcase for stable-diffusion-webui
stable-diffusion
stable-diffusion - A latent text-to-image diffusion model
stable-diffusion
imagen-pytorch - Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch