deep-video-mvs vs colmap

| | deep-video-mvs | colmap |
|---|---|---|
| Mentions | 1 | 28 |
| Stars | 205 | 6,794 |
| Growth | - | 2.7% |
| Activity | 0.0 | 9.2 |
| Last commit | almost 3 years ago | 4 days ago |
| Language | Python | C++ |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
deep-video-mvs
- Helpful pointers to state-of-the-art material for depth estimation from multi-view videos captured from cameras with arbitrary poses
If deep learning is an option, then you might want to check out http://zak.murez.com/atlas/, https://zju3dv.github.io/neuralrecon/, https://github.com/ardaduz/deep-video-mvs and the references therein. These methods can be better than classical ones, especially if overfitted on a specific type of scene.
colmap
- Magic123: One Image to High-Quality 3D Object Generation
- Drone mapping is pretty dang cool
Not saying it's easy to use, but there is an application GUI and it is free: https://github.com/colmap/colmap
- Import many photogrammetry programs' scenes into Blender
Colmap (Model folders (BIN and TXT), dense workspaces, NVM, PLY)
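The TXT model folders mentioned above follow COLMAP's documented sparse-model layout, where `cameras.txt` holds one line per camera: `CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]`. A minimal parser sketch for that file (the sample content is illustrative):

```python
# Minimal parser for COLMAP's cameras.txt (TXT sparse-model format).
# Each data line is: CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]
# Lines starting with '#' are comments.

def parse_cameras_txt(text):
    cameras = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        parts = line.split()
        cam_id = int(parts[0])
        cameras[cam_id] = {
            "model": parts[1],
            "width": int(parts[2]),
            "height": int(parts[3]),
            "params": [float(p) for p in parts[4:]],
        }
    return cameras

# Illustrative sample: a single PINHOLE camera (fx, fy, cx, cy).
sample = """# Camera list with one line of data per camera:
#   CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[]
1 PINHOLE 1920 1080 1600.0 1600.0 960.0 540.0
"""
cams = parse_cameras_txt(sample)
print(cams[1]["model"], cams[1]["params"])
```

The number and meaning of `PARAMS` depend on the camera model (e.g. `PINHOLE` has four: fx, fy, cx, cy), so a full importer would dispatch on the model name.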
- Best options for monocular reconstruction?
- Improving camera pose estimation using multiple ArUco markers
See COLMAP, for example: https://colmap.github.io/
- 2D images to 3D object reconstruction
You're looking into a problem called photogrammetry, and a well-studied one at that. I'd recommend looking into "structure from motion" (SfM); specifically, techniques that do "dense reconstruction." I'd recommend COLMAP to start with. It does pose estimation from images (e.g. you point it at a bunch of images and it will figure out the relative poses of the cameras that took them), as well as sparse and dense reconstruction.
- Framework to generate 3D meshes from camera images
COLMAP builds dense meshes from a collection of cameras https://colmap.github.io/
- Nerfstudio: A collaboration-friendly studio for NeRFs
- Neural Radiance Fields and input shape
I’ve seen references to using COLMAP (https://colmap.github.io/) to estimate camera position/pose, e.g. here
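COLMAP's `images.txt` stores each pose as a world-to-camera rotation quaternion plus translation (`QW QX QY QZ TX TY TZ`), while NeRF-style pipelines typically want camera-to-world matrices. A dependency-free sketch of that conversion (assuming unit quaternions):

```python
# Convert a COLMAP pose (world-to-camera quaternion + translation, as stored
# in images.txt: QW QX QY QZ TX TY TZ) into camera-to-world form.
# Sketch only; assumes the quaternion is already normalized.

def quat_to_rotmat(qw, qx, qy, qz):
    """3x3 rotation matrix (row-major nested lists) from a unit quaternion."""
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

def colmap_pose_to_c2w(qw, qx, qy, qz, tx, ty, tz):
    """Invert COLMAP's world-to-camera pose: R_c2w = R^T, center = -R^T t."""
    R = quat_to_rotmat(qw, qx, qy, qz)
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]  # transpose
    t = (tx, ty, tz)
    center = [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]
    return Rt, center  # rotation and camera center in world coordinates

# Identity rotation: the camera center is just the negated translation.
R, c = colmap_pose_to_c2w(1.0, 0.0, 0.0, 0.0, 1.0, 2.0, 3.0)
print(c)  # [-1.0, -2.0, -3.0]
```

Note that NeRF implementations also differ in camera-axis conventions (e.g. OpenGL vs. OpenCV), so an extra axis flip may be needed after this inversion.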
- 3D reconstruction of an object from videos or a few images
This is classical photogrammetry, and I agree with u/tdgros that the way to go is https://colmap.github.io/. There are actually better variants in the literature, but nothing is more reliable and user-friendly than COLMAP. It will give you a very precise point cloud, which can be meshed if needed.
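The point-cloud-then-mesh workflow described above maps onto COLMAP's command-line pipeline roughly as follows. A sketch only: the `images/`, `db.db`, `sparse/`, and `dense/` paths are placeholders, and the dense steps require a CUDA-capable GPU.

```shell
# Sparse reconstruction: features, matching, then incremental mapping.
colmap feature_extractor --database_path db.db --image_path images
colmap exhaustive_matcher --database_path db.db
colmap mapper --database_path db.db --image_path images --output_path sparse

# Dense reconstruction: undistort, multi-view stereo, fuse to a point cloud.
colmap image_undistorter --image_path images --input_path sparse/0 \
    --output_path dense
colmap patch_match_stereo --workspace_path dense
colmap stereo_fusion --workspace_path dense --output_path dense/fused.ply

# Optional: mesh the fused point cloud.
colmap poisson_mesher --input_path dense/fused.ply \
    --output_path dense/meshed-poisson.ply
```

The `automatic_reconstructor` subcommand bundles most of these steps into one invocation when the defaults are acceptable.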
What are some alternatives?
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
Meshroom - 3D Reconstruction Software
nerf-pytorch - A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.
OpenMVG (open Multiple View Geometry) - open Multiple View Geometry library. Basis for 3D computer vision and Structure from Motion.
SGDepth - [ECCV 2020] Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance
Hierarchical-Localization - Visual localization made easy with hloc
STEPS - This is the official repository for ICRA-2023 paper "STEPS: Joint Self-supervised Nighttime Image Enhancement and Depth Estimation"
nerf - Code release for NeRF (Neural Radiance Fields)
simplerecon - [ECCV 2022] SimpleRecon: 3D Reconstruction Without 3D Convolutions
openMVS - open Multi-View Stereo reconstruction library