kalidokit vs animegan2-pytorch

| | kalidokit | animegan2-pytorch |
|---|---|---|
| Mentions | 14 | 18 |
| Stars | 5,203 | 4,339 |
| Growth | - | - |
| Activity | 1.5 | 0.0 |
| Last commit | 10 months ago | over 1 year ago |
| Language | TypeScript | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
kalidokit
- Avatar Puppeteering with ThreeJS, ReadyPlayerMe, Kalidokit and MediaPipe
I'm trying to animate a Ready Player Me avatar using ThreeJS and Kalidokit (or something else) with MediaPipe Holistic pose. Here is a working JSFiddle:
- Any idea how to stream Kalidoface data to Unreal Engine Live Link so we can use it to drive MetaHuman? Or an alternative app for full body + face + fingers markerless mocap that works with UE Live Link?
Kalidoface is made by /u/YeeMachineDev and seems to be fully open source: https://github.com/yeemachine/kalidokit and https://github.com/yeemachine/kalidoface-3d
- cute live2d
- How To Make Vtuber Software?
In terms of software, what you linked uses optical tracking. The underlying tech is commonly called machine/computer vision. You can develop your own tracking software, but there are open-source solutions, like OpenCV, that make it faster to build apps. (You can use OpenCV with Python bindings to learn the basics somewhat quickly.) What you linked uses Kalidokit, an open-source solution using TensorFlow.js and Google's MediaPipe.
- Vtube Studio Pro question
Oh, my apologies! May I suggest Kalidoface then? https://kalidoface.com/ offers free face and arm tracking, plus they have a very friendly Discord for troubleshooting.
- I want to mod Kalidokit but I don't know where to start
[JS newbie here] I just discovered Kalidokit, a vtuber JS app by yeemachine, and I'm trying to figure out how I could play with its look and feel. Do you have any recommendations on how to proceed? In short, I don't know which files in the repo are relevant, how they are connected, or how to run the app on a local machine. Any help would be appreciated! Thanks.
- Cool (online) places for 2022
kalidokit
- [P] KalidoKit - Face, Pose, and Hand Tracking Kinematics
- "KalidoKit - Face, Pose, and Hand Tracking Kinematics" (in-browser animation of streamers)
- "KalidoKit - Face, Pose, and Hand Tracking Kinematics" (in-browser animation of streamer)
animegan2-pytorch
- Python Mini Projects
```python
from PIL import Image
import torch
from IPython.display import display

# upload images in Colab
from google.colab import files
uploaded = files.upload()

# https://github.com/bryandlee/animegan2-pytorch
# load the generator and the face2paint utility function
model_facev2 = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator",
                              pretrained="face_paint_512_v2")
face2paint = torch.hub.load("bryandlee/animegan2-pytorch:main", "face2paint",
                            size=512)

for INPUT_IMG in ["KatLi.JPG"]:
    img = Image.open(INPUT_IMG).convert("RGB")
    out_facev2 = face2paint(model_facev2, img)
    # display the original and the stylized image
    display(img)
    display(out_facev2)
```
- Stable Diffusion might be the holy grail.
I'll point you towards Prism, then. As far as style transfer goes, it's the best I've found so far. Extremely simple to run in colab and offline, plus the results are quite good for what it's doing. I used it a ton before I started running Face Portrait v2 offline as a general "art" filter for things I'm working on. It does stylization and some facial morphing in one pass, which adds a lot more personality than style transfer alone. You can also use it on anything, not just faces.
- Anime Botez Live! Her name-a.. Borat
I would recommend https://github.com/bryandlee/animegan2-pytorch and have used it for other videos, but I'm still working on my own model, the results of which you see above.
- The new Doctor Strange trailer, but I just applied an anime filter to it :)
Sure, I used this Python library and the Inshot video editor https://github.com/bryandlee/animegan2-pytorch
- Need a 2D character design maker software/etc for a visual novel
- Cool (online) places for 2022
PyTorch Implementation of AnimeGANv2
- Try AnimeGANv2 with PyTorch on Google Colab
(2021.02.21) The PyTorch version of AnimeGANv2 has been released; thanks to @bryandlee for his contribution.
- For u/CheritheFox
AnimeGAN2 PyTorch
- 3D to 2D face AI for videos (AnimeGANv2)
github: https://github.com/bryandlee/animegan2-pytorch
source tweet: https://twitter.com/Yokohara_h/status/1466521442686685188?s=20
web app (webcam): https://huggingface.co/spaces/nateraw/animegan-v2-for-videos
web app (images): https://huggingface.co/spaces/akhaliq/AnimeGANv2
What are some alternatives?
OpenSeeFace - Robust realtime face and facial landmark tracking on CPU with Unity integration
ebsynth - Fast Example-based Image Synthesis and Style Transfer
face-api.js - JavaScript API for face detection and face recognition in the browser and nodejs with tensorflow.js
AnimeGANv3 - Use AnimeGANv3 to make your own animation works, including turning photos or videos into anime.
Hand-Gesture-Recognition
StyleCLIPDraw - Styled text-to-drawing synthesis method. Featured at IJCAI 2022 and the 2021 NeurIPS Workshop on Machine Learning for Creativity and Design
ProsePainter
cupscale - Image Upscaling GUI based on ESRGAN
gaze-detection - Use machine learning in JavaScript to detect eye movements and build gaze-controlled experiences.
gradio - Build and share delightful machine learning apps, all in Python. Star to support our work!
holo-schedule - One browser extension COVERs all scheduled and guerrilla livestreams.