| | EasyMocap | sd-webui-text2video |
|---|---|---|
| Mentions | 4 | 29 |
| Stars | 3,330 | 1,248 |
| Growth | 1.8% | 1.7% |
| Activity | 6.7 | 9.0 |
| Latest commit | about 1 month ago | 4 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
EasyMocap
-
Tools for AI Animation and Filmmaking, Community Rules, etc. (**FAQ**)
EasyMocap (Generate Motion Capture Data from Video) https://github.com/zju3dv/EasyMocap -------Text 2 Video--------
-
Introduction
It might be fun to work on an open-source game if I could find one and try something new. Or, if I were really confident in my abilities, perhaps I would try something like the following repo, which I forked for this blog post as per the instructions: https://github.com/sfrunza13/EasyMocap (this is the fork); https://github.com/zju3dv/EasyMocap (this is the original URL).
- EasyMocap: Toolbox for markerless human motion capture from RGB videos
-
Can an ugly game still sell?
Machine learning has advanced to the point that you can extract 3D pose information from a video. A handful of startups have popped up recently that convert user-uploaded videos into animations, but you can get the same capability for free on GitHub: https://github.com/zju3dv/EasyMocap
sd-webui-text2video
- Fat heroes
- SDXL 🤝 RealisticVision3 working together
- Testing Zeroscope v2 Text-to-Video using vid2vid
- zeroscope_v2_XL: a new open source 1024x576 video model designed to take on Gen-2
-
Fresh Pasta of Bel-Air
Link to Txt2Video extension: https://github.com/kabachuha/sd-webui-text2video
- WELCOME TO OLLIVANDER'S. Overriding my usual bad footage (& voiceover), the head, hands & clothes were created separately in detail in Stable Diffusion using my temporal consistency technique and then merged back together. The background was also AI, animated using a created depth map.
-
Surf's up, poodles! Text to video, ModelScope
Thanks! I'm using a TouchDesigner setup + UI I've built that uses the API in https://github.com/kabachuha/sd-webui-text2video for AUTOMATIC1111.
- How to Text 2 video?
- First Open-Source 1024x576 Text To Video Model (potat1) is out!
- "Acid Rain" (ModelScope text2video / Zeroscope 320x) [4K]
What are some alternatives?
Thin-Plate-Spline-Motion-Model - [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
EasyMocap - Make human motion capture easier.
ebsynth_utility - AUTOMATIC1111 UI extension for creating videos using img2img and ebsynth.
TelegramGPT - A simple Python script for a Telegram AI chatbot using DALL-E.
stable-diffusion-webui - Stable Diffusion web UI
sd_dreambooth_extension
sd-webui-modelscope-text2video - Auto1111 extension consisting of implementation of text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies [Moved to: https://github.com/deforum-art/sd-webui-text2video]
stable-diffusion-webui-normalmap-script - Normal Maps for Stable Diffusion WebUI
sd-webui-dragGAN-extension - Extension of the Stable Diffusion web UI for DragGAN.
aries-cloudagent-python - Hyperledger Aries Cloud Agent Python (ACA-Py) is a foundation for building decentralized identity applications and services running in non-mobile environments.
sd-webui-deforum - Deforum extension for AUTOMATIC1111's Stable Diffusion webui