AIwaifu vs talking-head-anime-3-demo

| | AIwaifu | talking-head-anime-3-demo |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 315 | 868 |
| Growth | - | - |
| Activity | 6.8 | 0.0 |
| Latest commit | about 1 month ago | 8 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
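The recency weighting described above can be sketched as a decayed sum over commit timestamps. The exponential half-life used here is an illustrative assumption, not the site's published formula:

```python
import time

def activity_score(commit_timestamps, half_life_days=30):
    """Recency-weighted commit activity: each commit contributes a weight
    that halves every `half_life_days` days, so recent commits dominate.
    NOTE: illustrative sketch only -- the tracker's actual formula is
    not published.
    """
    now = time.time()
    score = 0.0
    for ts in commit_timestamps:
        age_days = (now - ts) / 86400
        score += 0.5 ** (age_days / half_life_days)
    return round(score, 1)
```

With this weighting, a commit made today contributes roughly 1.0 to the score, while a commit from a year ago contributes almost nothing, which matches the intuition that 0.0 means a dormant project.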
AIwaifu
-
An AI waifu who can chat in English and speak to you in Japanese, with animation (also lewdable).
Repo Link: https://github.com/HRNPH/AIwaifu
talking-head-anime-3-demo
-
What would I use to create a chatbot with an animated face of a man I created with Stable Diffusion, and do it all from my local PC rather than some website? Anyone have any ideas on this?
Talking Head. This is the visual part: converting the image into an animated character, and honestly the hard part. There's VMagicMirror for 3D VRM/VRoid model characters, and for anime characters there's Talking Head Anime 3 if you modify it a bit. Beyond that there's basically nothing. Some online services can do it for you, but not locally/offline.
-
100% AI including verse, photo, outpainting, animation, voice, and lip syncing
I'm currently using a modified version of this repo which does face/head posing for anime characters. It works okayish and runs at around 18 fps, which is decent; it seems to be the best thing available at the moment. The lipsync isn't really accurate (you have to manually enter the mouth position), but I mapped the "a" viseme to volume and it works well enough lol.
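The volume-to-viseme trick mentioned above can be sketched as mapping the RMS level of an audio chunk onto a mouth-open value in [0, 1]. The function name and the dB thresholds here are assumptions for illustration, not values from the repo:

```python
import numpy as np

def volume_to_mouth_open(samples, floor_db=-40.0, ceil_db=-10.0):
    """Map a chunk of audio samples (float32 in [-1, 1]) to a mouth-open
    value in [0, 1], suitable for driving an 'a' viseme parameter.
    floor_db/ceil_db are assumed calibration thresholds; tune them to
    your mic level.
    """
    if len(samples) == 0:
        return 0.0
    rms = float(np.sqrt(np.mean(np.square(samples))))
    if rms <= 0.0:
        return 0.0
    db = 20.0 * np.log10(rms)
    # Linearly rescale [floor_db, ceil_db] -> [0, 1], clamped at the ends.
    return float(min(1.0, max(0.0, (db - floor_db) / (ceil_db - floor_db))))
```

Called once per audio frame, this gives silence a closed mouth, loud speech a fully open one, and a smooth ramp in between, which is why a single volume-driven "a" viseme already looks passable.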
-
Is there a way to do facial rigs on AI images?
EasyVTuber is based on Talking Head, which can animate the mouth, eyes, and even the eyebrows, with slight head turning and breathing, driven by TrueDepth facial tracking (iPhone): https://github.com/pkhungurn/talking-head-anime-3-demo
-
Can't find root directory, help?
https://github.com/pkhungurn/talking-head-anime-3-demo I keep trying to get the above program to work, but I can't find the repository's root directory so I can unzip the model files into the data/models folder. How do I find the repository's root directory?
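The repository root is simply the top-level folder that `git clone` (or unzipping the GitHub download) creates. A small script can sanity-check the layout before running the demo; `data/models` comes from the question above, while the function name and error wording are illustrative assumptions:

```python
from pathlib import Path

def check_repo_layout(root="."):
    """Sanity-check that `root` looks like the demo checkout and that the
    downloaded model files were unzipped into <root>/data/models.
    Returns the model file names found there, or raises if the folder
    is missing. (Sketch only; adjust paths to the repo's README.)
    """
    models = Path(root) / "data" / "models"
    if not models.is_dir():
        raise FileNotFoundError(
            f"{models} not found -- unzip the model archive so the files "
            f"land in <repo-root>/data/models"
        )
    return sorted(p.name for p in models.iterdir())
```

Running it from inside the cloned folder (`check_repo_layout(".")`) either lists the model files it sees or tells you exactly which directory it expected.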
What are some alternatives?
VTuber_Unity - Use Unity 3D character and Python deep learning algorithms to stream as a VTuber!
EasyVtuber - tha3, but runs at 40 fps on a 3080, with virtual webcam support
AI-Waifu-Vtuber - AI Vtuber for Streaming on Youtube/Twitch
squirrel-core - A Python library that enables ML teams to share, load, and transform data in a collaborative, flexible, and efficient way 🌰
vtuber-livechat-dataset - 📊 VTuber 1B: Billion-scale Live Chat and Moderation Event Dataset
BlenderNeRF - Easy NeRF synthetic dataset creation within Blender
Face-tracking-with-Anime-characters - Hello! I have made a Python project where YURI from the game Doki Doki Literature Club accesses the webcam and stares directly into the player's soul. Hope you enjoy!
chatgpt-chan - An implementation of https://www.insider.com/tiktok-programmer-ai-girlfriend-waifu-euthanized-stopped-working-chatgpt-2023-1
OpenSeeFace - Robust realtime face and facial landmark tracking on CPU with Unity integration
Tsuki - Manga uncensoring scripts using DeepCreamPy & HentAI combined with custom scripts
Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]