| | monodepth2 | IJCAI2023-CoNR |
|---|---|---|
| Mentions | 6 | 4 |
| Stars | 3,977 | 783 |
| Growth | 0.5% | 0.5% |
| Activity | 0.0 | 5.5 |
| Latest commit | 8 months ago | 9 months ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
monodepth2
- Calculation of an absolute depth map from multiple images or videos.
- Easy to train a monocular (self-)supervised depth estimation model?
I've used monodepth2 before and it's great: https://github.com/nianticlabs/monodepth2
- Sources: Pixel 6 Pro was supposed to launch with face unlock
How can a single camera do that? My experience with computer vision is fairly limited, so I'm curious how that would work. My understanding is that you need to be able to generate a point map, use stereo vision, or use some non-CV method, e.g. radar like the Pixel 4. 2D depth estimation can be done with a single camera in a somewhat useful way, but it's not secure (https://github.com/nianticlabs/monodepth2 -- somewhat similar functionality now exists in OpenCV). Can you expand on what AI the single camera is being combined with that provides security guarantees?
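The commenter contrasts monocular estimation with stereo vision. For reference, metric depth from a calibrated stereo pair follows the standard pinhole relation depth = focal length × baseline / disparity, which is exactly what a single camera lacks the baseline for. A minimal sketch with illustrative, made-up camera values (not taken from any specific device):

```python
# Standard pinhole-stereo relation: depth = focal_length * baseline / disparity.
# The numbers below are illustrative assumptions, not real camera parameters.

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity (pixels) to metric depth (metres)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 7.5 cm baseline, 10 px disparity -> 5.25 m
print(disparity_to_depth(10.0, 700.0, 0.075))
```

Note the inverse relationship: halving the disparity doubles the depth, which is why distant objects (small disparity) are hard to range precisely and why monocular networks like monodepth2 can only predict depth up to an unknown scale.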
- Can anyone explain the following GitHub code to me? Also, it's my first time using GitHub, so I'm completely lost.
- Estimating camera height, orientation and field of view from a single monocular image.
I suspect you may have the best success by using monocular depth approaches (for example something like this: https://github.com/nianticlabs/monodepth2).
- Looking for a fast monocular depth estimation library to use in a Rust project.
After that I have to do the same for Python, I think, and then I have to figure out how to use a library like https://github.com/ialhashim/DenseDepth or https://github.com/nianticlabs/monodepth2 for that GStreamer plugin (or element; still trying to grasp the terminology here)
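Whichever library ends up behind the GStreamer element, models like monodepth2 and DenseDepth output relative (unitless) depth maps, so the plugin will need a normalisation step before a frame can be pushed downstream as video. A minimal sketch of that step (the function name and the 8-bit grayscale target are assumptions for illustration, not part of either library's API):

```python
import numpy as np

def depth_to_u8(depth):
    """Normalise a relative depth map to a uint8 grayscale frame.

    Min-max scales the map to 0-255 so it can be handed to a video
    pipeline; a flat map is returned as all zeros to avoid dividing by zero.
    """
    d = np.asarray(depth, dtype=np.float32)
    lo, hi = d.min(), d.max()
    if hi - lo < 1e-8:
        return np.zeros(d.shape, dtype=np.uint8)
    return ((d - lo) / (hi - lo) * 255.0).astype(np.uint8)

frame = depth_to_u8([[0.0, 1.0], [2.0, 4.0]])
```

Per-frame min-max scaling is the simplest choice but makes brightness flicker between frames; a fixed or smoothed range is a common refinement once the pipeline works.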
IJCAI2023-CoNR
- CharTurner - A work-in-progress resource for character artists.
Maybe, or maybe not
- Create an animation with a character sheet?
I wanted to know if anyone has created an animation with a character sheet using this project. It has a Google Colab and says it can accept a 2D character sheet and a video and output an animation. I know the popular CharTurner embedding has been floating around, and I wonder if anyone has attempted it.
- A Chinese-made AI that can automatically generate "3D character dance videos" has been released on GitHub and is attracting attention
- Render dancing videos from hand-drawn anime images
What are some alternatives?
DenseDepth - High Quality Monocular Depth Estimation via Transfer Learning
ml-course - Open Machine Learning course
packnet-sfm - TRI-ML Monocular Depth Estimation Repository
glasses - High-quality Neural Networks for Computer Vision 😎
cs231n - Note and Assignments for CS231n: Convolutional Neural Networks for Visual Recognition
open_clip - An open source implementation of CLIP.
torchdyn - A PyTorch library entirely dedicated to neural differential equations, implicit models and related numerical methods
HugsVision - An easy-to-use Hugging Face wrapper for state-of-the-art computer vision
ZoeDepth - Metric depth estimation from a single image
diffusers-interpret - Diffusers-Interpret 🤗🧨🕵️♀️: Model explainability for 🤗 Diffusers. Get explanations for your generated images.
DeepLearningExamples - State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.