Haruhi-Suzumiya-3D-School vs nerfstudio

| | Haruhi-Suzumiya-3D-School | nerfstudio |
|---|---|---|
| Mentions | 4 | 10 |
| Stars | 15 | 8,605 |
| Growth | - | 3.5% |
| Activity | 2.4 | 9.6 |
| Latest commit | 8 months ago | 6 days ago |
| Language | Python | Python |
| License | Creative Commons Zero v1.0 Universal | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
Haruhi-Suzumiya-3D-School
- Recreating the real-life school from the anime "The Melancholy of Haruhi Suzumiya" at 1:1 scale, open source
- Recreation of Haruhi Suzumiya's school in 3D, 1:1, open source now!
- Made the Haruhi Suzumiya school in Blender, free on GitHub
- Now open source and in development (available for download), thanks for the help! (5 months ago)
  GitHub download: https://github.com/SquirrelModeller/Haruhi-Suzumiya-3D-School/releases/tag/0.4
nerfstudio
- Smerf: Streamable Memory Efficient Radiance Fields
You're looking at the right paper for this. Instead of one big model, they train several smaller ones, one per region of the scene; this keeps rendering fast for large scenes.
This is similar to Block-NeRF [0]; their project page shows some videos of exactly what you're asking about.
As for an easy way of doing this, there's nothing out of the box. You can keep an eye on nerfstudio [1], and if you feel brave you could implement this paper and open a PR!
[0] https://waymo.com/intl/es/research/block-nerf/
[1] https://github.com/nerfstudio-project/nerfstudio
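The partitioning idea the comment describes can be sketched in a few lines. This is a hypothetical illustration, not SMERF's or Block-NeRF's actual code: `TiledRadianceField`, its grid layout, and the stand-in per-tile parameters are all invented for the example.

```python
# Hypothetical sketch of region-partitioned radiance fields: the scene is
# split into a grid of tiles, each owned by its own small model, so only
# one submodel is evaluated per query point.
import numpy as np

class TiledRadianceField:
    def __init__(self, bounds, grid=(2, 2, 2)):
        self.lo = np.asarray(bounds[0], dtype=float)
        self.hi = np.asarray(bounds[1], dtype=float)
        self.grid = np.asarray(grid)
        # One tiny "model" per tile; random parameters stand in for a
        # small per-region MLP's weights.
        rng = np.random.default_rng(0)
        self.tiles = {tuple(idx): rng.normal(size=4)
                      for idx in np.ndindex(*grid)}

    def tile_of(self, xyz):
        # Map a world-space point to the index of the tile containing it.
        frac = (np.asarray(xyz, dtype=float) - self.lo) / (self.hi - self.lo)
        idx = np.clip((frac * self.grid).astype(int), 0, self.grid - 1)
        return tuple(int(i) for i in idx)

    def query(self, xyz):
        # Only the submodel owning this point is evaluated, which is what
        # keeps per-sample cost independent of total scene size.
        w = self.tiles[self.tile_of(xyz)]
        return w  # stand-in for the (rgb, density) output of that tile's MLP

field = TiledRadianceField(bounds=([0, 0, 0], [10, 10, 10]), grid=(2, 2, 2))
print(field.tile_of([1, 1, 1]))  # point near the low corner of the scene
print(field.tile_of([9, 9, 9]))  # point near the high corner of the scene
```

The design choice worth noting is that each tile's model can be small enough to stream or keep in memory on its own, which is what makes the approach scale to large scenes.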
- Researchers create open-source platform for Neural Radiance Field development
- First attempt at photogrammetry using a DJI Mini 2 and Metashape, 460 images, manual. What did I do wrong? What can I do to improve it? Would appreciate any kind of advice for a newbie.
Try rendering NeRFs with your footage; you're going to love the result, and NeRFs are pretty robust to reflections. You can reuse your Metashape solve in nerfstudio: https://github.com/nerfstudio-project/nerfstudio
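For reference, nerfstudio exposes dataset ingestion through its `ns-process-data` command, which has a Metashape mode. A rough sketch of the workflow follows; the paths are placeholders, and flag names should be verified against `ns-process-data metashape --help` for your installed version.

```shell
# Convert a Metashape solve (images + exported cameras.xml) into
# nerfstudio's dataset format. Paths are placeholders.
ns-process-data metashape \
  --data ./images \
  --xml ./cameras.xml \
  --output-dir ./nerfstudio_data

# Train the default nerfacto model on the converted dataset.
ns-train nerfacto --data ./nerfstudio_data
```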
- What is the best way to create a dataset for NeRF?
Beyond these tips, I don't have much. There's a lot of research on improving the quality of solves in the software itself. I'm hoping those improvements land in instant-ngp, since it's fast and free, but it's research software, not a product, so we'll see. Another thing worth a look is Nerfstudio: it can use instant-ngp as a solver, but there are other solvers too. I briefly tried it but couldn't figure out how it worked in the small amount of time I spent with it; I hope to get back to it.
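For context on what "a dataset for NeRF" looks like in practice: instant-ngp and nerfstudio both consume a `transforms.json` file describing camera intrinsics and per-image poses, typically produced from a COLMAP solve by their conversion scripts. The following is a hedged sketch of the shape of that file only; field names follow the instant-ngp convention, all values are placeholders, and `camera_model` choices vary between tools.

```json
{
  "fl_x": 1200.0,
  "fl_y": 1200.0,
  "cx": 960.0,
  "cy": 540.0,
  "w": 1920,
  "h": 1080,
  "camera_model": "OPENCV",
  "frames": [
    {
      "file_path": "images/frame_0001.png",
      "transform_matrix": [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0]
      ]
    }
  ]
}
```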
- Nerfstudio – A collaboration friendly studio for NeRFs
- When the client's management is happy but their dev team is a pain
- A collaboration friendly studio for NeRFs
- NeRF ➜ point cloud export — now available via nerfstudio
nerf.studio | github | discord
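nerfstudio's export path is the `ns-export` command; a point-cloud export looks roughly like the sketch below. The paths, timestamp, and point count are placeholders; check `ns-export pointcloud --help` for the exact flags in your version.

```shell
# Export a point cloud from a trained nerfstudio model.
# --load-config points at the config.yml written during training;
# the output directory and point count are illustrative values.
ns-export pointcloud \
  --load-config outputs/my-scene/nerfacto/2024-01-01_000000/config.yml \
  --output-dir exports/pointcloud/ \
  --num-points 1000000
```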
- Show HN: A collaboration friendly studio for NeRFs
What are some alternatives?
BlenderProc - A procedural Blender pipeline for photorealistic training image generation
multinerf - A Code Release for Mip-NeRF 360, Ref-NeRF, and RawNeRF
BlendLuxCore - Blender Integration for LuxCore
TorchSharp - A .NET library that provides access to libtorch, the library that powers PyTorch.
pyrender - Easy-to-use glTF 2.0-compliant OpenGL renderer for visualization of 3D scenes.
sdfstudio - A Unified Framework for Surface Reconstruction
vedo - A python module for scientific analysis of 3D data based on VTK and Numpy
smerf-3d
pyntcloud - pyntcloud is a Python library for working with 3D point clouds.
vision_transformer
kaolin-wisp - NVIDIA Kaolin Wisp is a PyTorch library powered by NVIDIA Kaolin Core to work with neural fields (including NeRFs, NGLOD, instant-ngp and VQAD).
CIPS-3D - 3D-aware GANs based on NeRF (arXiv).