CogVideo vs glid-3-xl-stable

| | CogVideo | glid-3-xl-stable |
|---|---|---|
| Mentions | 39 | 20 |
| Stars | 3,512 | 286 |
| Growth | 1.6% | - |
| Activity | 2.4 | 0.0 |
| Last commit | 11 months ago | over 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
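The exact formula behind the activity number isn't published on this page, but a recency-weighted score of this kind can be sketched in a few lines. This is a hypothetical illustration only; the half-life and scaling below are assumptions, not the tracker's real metric.

```python
import time

def activity_score(commit_timestamps, half_life_days=30.0):
    """Hypothetical recency-weighted activity: recent commits count more than old ones."""
    now = time.time()
    score = 0.0
    for ts in commit_timestamps:
        age_days = (now - ts) / 86400.0
        score += 0.5 ** (age_days / half_life_days)  # each commit decays with a 30-day half-life
    return score

# Example: one recent commit and two older ones
commits = [time.time() - days * 86400 for days in (2, 90, 400)]
print(round(activity_score(commits), 2))  # roughly 1.08 with the defaults above
```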
CogVideo
- InstructPix2Pix Video: "Turn the wave into trash"
Additionally, two open source demo models, [CogVideo](https://github.com/THUDM/CogVideo) by a group of CS students and a model by [Antonia Antonova](https://antonia.space/text-to-video-generation), have presented their own innovative methods of generating video from text.
- Effortpost: The Future Of Media Synthesis and AI Art
The second thing that will happen is the appearance of AI video and audio. Google has shown two programs for video generation: one which is fairly high quality, and another which can make long videos with several scenes. Meta has also demonstrated its own. We've already seen other projects like CogVideo, as well as many others that are currently being worked on. It's likely that these techniques will become so refined over the next year or two that they'll see a boom similar to image generation programs. Eventually, they'll find a similar application in video editing, once coherence is good enough. Select a person's shirt, and it stays that way for the remainder of the scene. Change an actor's hairstyle in real time, or add characters that didn't exist into a scene and let the computer figure out the desired level of realism. This'll revolutionize VFX to a degree where making an effects-heavy film will be less about wrangling complex toolsets and more about making aesthetic choices of style and placement.
- AI Content Generation, Part 1: Machine Learning Basics
- Can we please make a general update on all the "most important" news/repos available?
- Stable Diffusion Public Release – Stability.ai
Check out https://github.com/THUDM/CogVideo - progress is being made on coherent video generation.
Characters and dialogue are effectively solved, just look at GPT-3.
The entity behind StableDiffusion is also supporting generative music art, so let's see what is coming out of that: https://www.harmonai.org/
We are currently far away from generating a production quality movie with AI, but I don't think it's going to be nearly as long as a lifetime. In my opinion, we'll have high quality AI shorts within the decade.
- How far away are we from having AI like DALL-E 2 be able to create other media like 3D models or video?
CogVideo and a CogView web app.
- Does training transformers on large corpuses of music files have some hidden difficulty which makes it impossible?
A better comparison to AI music generation would be video generation, which has not improved much since I saw the first examples some years ago. The latest iteration is stuff like CogVideo, and it is only able to generate 4-second videos with moderate-to-strong artifacts.
- [R] CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers + Gradio Web Demo
github: https://github.com/THUDM/CogVideo
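The post above links the repo and a Gradio web demo. As a rough illustration of how such a text-to-video demo is typically wired up with Gradio (the `generate_video` function below is a hypothetical placeholder, not CogVideo's actual inference API):

```python
import gradio as gr

def generate_video(prompt: str) -> str:
    """Hypothetical placeholder: the real demo would call the CogVideo pipeline here
    and return the path of the rendered clip."""
    raise NotImplementedError("plug the CogVideo inference code in here")

demo = gr.Interface(
    fn=generate_video,
    inputs=gr.Textbox(label="Prompt", placeholder="A cat is eating ice cream"),
    outputs=gr.Video(label="Generated clip"),
    title="CogVideo text-to-video demo (sketch)",
)

if __name__ == "__main__":
    demo.launch()
```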
- CogVideo: Code and 9.4B Model for Text-to-Video Generation via Transformers
- CogVideo (text-to-video) model, code, and demo are available
GitHub repo.
glid-3-xl-stable
- New inpainting model from RunwayML out
I don't know how you can say this, but it's completely different from anything we had before. The only exception was https://github.com/Jack000/glid-3-xl-stable/wiki/Custom-inpainting-model; that model was a fine-tuned version of v1.4, but not having a separate channel for the original image and the mask makes it weaker.
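For context on the "separate channel" point: inpainting-specialized checkpoints such as RunwayML's are typically trained with a UNet whose input concatenates the noisy latent, the mask, and the VAE-encoded masked image, rather than the plain 4-channel latent alone. A minimal sketch of that input layout, with shapes assumed from Stable Diffusion's 64x64 latent space and the UNet itself stubbed out:

```python
import torch

batch = 1
noisy_latents       = torch.randn(batch, 4, 64, 64)  # current diffusion latent
masked_image_latent = torch.randn(batch, 4, 64, 64)  # VAE-encoded image with the hole blanked out
mask                = torch.zeros(batch, 1, 64, 64)  # 1 = region to repaint, downscaled to latent size
mask[:, :, 16:48, 16:48] = 1.0

# An inpainting UNet takes 4 + 1 + 4 = 9 input channels, so at every step it "sees"
# which pixels must be kept and which must be filled in. A fine-tune of the stock
# 4-channel UNet has no such extra inputs, which is the weakness described above.
unet_input = torch.cat([noisy_latents, mask, masked_image_latent], dim=1)
print(unet_input.shape)  # torch.Size([1, 9, 64, 64])
```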
- Local inpainting/outpainting GUIs/Programs?
Check out the item by lkwq007 in this list https://www.reddit.com/r/StableDiffusion/comments/wqaizj/list_of_stable_diffusion_systems/, and also the model for this web app https://replicate.com/devxpy/glid-3-xl-stable, which I believe is this https://github.com/Jack000/glid-3-xl-stable.
- I'm building my own image editor using canvas and Stable Diffusion AI model
Right now I am using a different, better-optimized model just for outpainting/inpainting, using https://github.com/Jack000/glid-3-xl-stable as a base.
- getimg.ai - I've made outpainting/inpainting editor publicly available
I'm using a slightly modified and optimized version of https://github.com/Jack000/glid-3-xl-stable for inpainting/outpainting.
- Inpainting/outpainting webapp UI with actually good inpainting capabilities, mobile support & more (using glid-3-xl-sd custom inpainting model) - patience.ai update
For this editor we've integrated Jack Qiao's excellent custom inpainting model from the glid-3-xl-sd project instead. This is a fine-tuned version of Stable Diffusion with significantly better inpainting capabilities than standard SD. You can read more about how it works here along with comparison images between it and regular SD.
- Out/Inpainting Specialized Model (Jack's)
You can't. They are different architectures: https://github.com/Jack000/glid-3-xl-stable/issues/17
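A hedged illustration of why weights from one architecture can't simply be loaded into the other: even a single layer with a different input width makes `load_state_dict` fail. The 4-vs-9 channel counts below are illustrative, not taken from the linked issue.

```python
import torch

# First convolution of a stock 4-channel text-to-image UNet vs. an inpainting UNet
# that also receives mask and masked-image channels (channel counts are illustrative).
stock_first_conv   = torch.nn.Conv2d(4, 320, kernel_size=3, padding=1)
inpaint_first_conv = torch.nn.Conv2d(9, 320, kernel_size=3, padding=1)

try:
    inpaint_first_conv.load_state_dict(stock_first_conv.state_dict())
except RuntimeError as err:
    # size mismatch for weight: checkpoint [320, 4, 3, 3] vs model [320, 9, 3, 3]
    print("incompatible:", err)
```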
- [Update] stablediffusion-infinity now becomes a web app with better UI (outpainting with Stable Diffusion on an infinite canvas)
I am wondering, though, if this one uses the glid-3 inpainting model: https://github.com/Jack000/glid-3-xl-stable/wiki/Custom-inpainting-model
- Will Stable Diffusion ever gain a better inpainting feature on par with Dalle, or is this a fundamental difference?
- Stable Diffusion, custom in/outpainting model
- Progress on getimg.ai - outpainting prototype and other updates
(Also check out this custom SD inpainting/outpainting model; it's easily the best I've seen: https://github.com/Jack000/glid-3-xl-stable/wiki/Custom-inpainting-model)
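Outpainting like this is commonly implemented as inpainting on an enlarged canvas: the source image is pasted onto a bigger canvas and only the newly exposed border is marked for repainting. A rough sketch using the diffusers inpainting pipeline (the checkpoint, prompt, and sizes are illustrative assumptions, not what getimg.ai actually runs):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Any Stable Diffusion inpainting checkpoint works here; this one is just an example.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

source = Image.open("photo.png").convert("RGB")  # assume a 512x512 input image

# Enlarge the canvas to the right; the new strip is what the model should invent.
canvas = Image.new("RGB", (768, 512), (127, 127, 127))
canvas.paste(source, (0, 0))

# Mask: white = repaint, black = keep. Only the new strip is white.
mask = Image.new("L", (768, 512), 0)
mask.paste(255, (512, 0, 768, 512))

result = pipe(
    prompt="a wide landscape, seamless continuation of the scene",
    image=canvas,
    mask_image=mask,
    width=768,
    height=512,
).images[0]
result.save("outpainted.png")
```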
What are some alternatives?
stable-diffusion-ui - Easiest 1-click way to install and use Stable Diffusion on your computer. Provides a browser UI for generating images from text prompts and images. Just enter your text prompt, and see the generated image. [Moved to: https://github.com/easydiffusion/easydiffusion]
diffusion-ui - Frontend for deep learning image generation
dalle-playground - A playground to generate images from any text prompt using Stable Diffusion (past: using DALL-E Mini)
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) by way of Textual Inversion (https://arxiv.org/abs/2208.01618) for Stable Diffusion (https://arxiv.org/abs/2112.10752). Tweaks focused on training faces, objects, and styles.
stable-diffusion - Optimized Stable Diffusion modified to run on lower GPU VRAM
stable-diffusion-webui-feature-showcase - Feature showcase for stable-diffusion-webui
stable-diffusion - Latent Text-to-Image Diffusion
stable-diffusion - A latent text-to-image diffusion model
awesome-stable-diffusion - Curated list of awesome resources for the Stable Diffusion AI Model.
imagen-pytorch - Implementation of Imagen, Google's Text-to-Image Neural Network, in Pytorch
stable-diffusion-webui - Stable Diffusion web UI