| | depthai | colmap |
|---|---|---|
| Mentions | 1 | 28 |
| Stars | 869 | 6,794 |
| Growth | 1.2% | 2.7% |
| Activity | 8.6 | 9.2 |
| Last commit | 8 days ago | 4 days ago |
| Language | Python | C++ |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
depthai
- helpful pointers to state-of-the-art material for depth estimation from multi-view videos captured from cameras with arbitrary poses

  > Ah, good call-out. Luxonis' DepthAI has solid resources for this here: https://github.com/luxonis/depthai/issues/173. DepthAI runs neural inference on stereo pairs to produce a depth map (among other things).
colmap
- Magic123: One Image to High-Quality 3D Object Generation
- Drone mapping is pretty dang cool

  > Not saying it's easy to use, but there is an application GUI and it is free: https://github.com/colmap/colmap
- Import many photogrammetry software's scenes into Blender

  > Colmap (Model folders (BIN and TXT), dense workspaces, NVM, PLY)
- Best options for monocular reconstruction?
- improving camera pose estimation using multiple aruco markers

  > See COLMAP, for example: https://colmap.github.io/
- 2D images to 3D object reconstruction

  > You're looking into a problem called photogrammetry, and a well-studied one at that. I'd recommend looking into "structure from motion" (SfM), specifically techniques that do "dense reconstruction." I'd recommend COLMAP to start with. It does pose estimation from images (e.g. you point it at a bunch of images and it will figure out the relative poses of the cameras that took them), as well as sparse and dense reconstruction.
- Framework to generate 3D meshes from camera images

  > COLMAP builds dense meshes from a collection of cameras: https://colmap.github.io/
- Nerfstudio: A collaboration-friendly studio for NeRFs
- Neural Radiance Fields and input shape

  > I've seen references to using COLMAP (https://colmap.github.io/) to estimate camera position/pose, e.g. here
- 3D reconstruction of an object from videos/few images

  > Classical photogrammetry, where I agree with u/tdgros that the way to go is https://colmap.github.io/. There are better variants in the literature, but nothing is more reliable and user-friendly than COLMAP. This will give you a very precise point cloud, which can be meshed if needed.
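Several of the comments above describe COLMAP's standard sparse-reconstruction workflow: extract features, match them across images, then run the incremental mapper to recover camera poses and a sparse point cloud. A minimal sketch of driving those CLI steps from Python — assuming the `colmap` binary is on `PATH`; the directory paths are placeholders:

```python
# Sketch: build and run COLMAP's classic SfM pipeline via its CLI.
# Assumes the `colmap` binary is installed and on PATH; paths are placeholders.
import subprocess
from pathlib import Path


def colmap_commands(image_dir: str, workspace: str) -> list[list[str]]:
    """Return the command sequence for sparse reconstruction:
    feature extraction -> exhaustive matching -> incremental mapping."""
    db = str(Path(workspace) / "database.db")
    sparse = str(Path(workspace) / "sparse")
    return [
        ["colmap", "feature_extractor",
         "--database_path", db, "--image_path", image_dir],
        ["colmap", "exhaustive_matcher",
         "--database_path", db],
        ["colmap", "mapper",
         "--database_path", db, "--image_path", image_dir,
         "--output_path", sparse],
    ]


def run_pipeline(image_dir: str, workspace: str) -> None:
    """Execute each stage in order, failing fast on a non-zero exit."""
    Path(workspace, "sparse").mkdir(parents=True, exist_ok=True)
    for cmd in colmap_commands(image_dir, workspace):
        subprocess.run(cmd, check=True)
```

For dense reconstruction (the meshable point clouds mentioned above), COLMAP adds further stages after the mapper; `colmap automatic_reconstructor --workspace_path ... --image_path ...` wraps the whole pipeline in a single command if you don't need per-stage control.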
What are some alternatives?
instant-ngp - Instant neural graphics primitives: lightning fast NeRF and more
Meshroom - 3D Reconstruction Software
nerf-pytorch - A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.
OpenMVG - open Multiple View Geometry library; basis for 3D computer vision and Structure from Motion.
Hierarchical-Localization - Visual localization made easy with hloc
BundleFusion - [Siggraph 2017] BundleFusion: Real-time Globally Consistent 3D Reconstruction using Online Surface Re-integration
nerf - Code release for NeRF (Neural Radiance Fields)
openMVS - open Multi-View Stereo reconstruction library
OpenSfM - Open source Structure-from-Motion pipeline
gtsfm - End-to-end SFM framework based on GTSAM