gaussian-splatting
Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
Chris' post doesn't really give much background info, so here's what's going on here and why it's awesome.
Real-time 3D rendering has historically been based on rasterisation of polygons. This has brought us a long way and has a lot of advantages, but making photorealistic scenes takes a lot of work from the artist. You can scan real objects using photogrammetry and then convert the result to high-poly meshes, but photogrammetry rigs are pro-level tools, and the resulting assets won't render at real-time speeds. Unreal Engine 5 introduced Nanite, a very advanced LoD algorithm, and that helps a lot, but even so we seem to be hitting the limits of what can be done with polygon-based rendering.
3D Gaussian Splatting is a new machine-learning-based technique that renders photorealistic 3D scenes in real time, reconstructed from only a set of photos taken with ordinary cameras. It replaces polygon-based rendering with radiance fields.
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/
3DGS uses several advanced techniques:
1. A 3D point cloud is estimated using "structure from motion" techniques.
2. The points are turned into "3D Gaussians", which are sort of floating blobs of light. Each one has a position, an opacity, a covariance matrix that defines its shape and orientation, and a view-dependent colour encoded with "spherical harmonics" (no, me neither). They're ellipsoids, so they can be thought of as spheres that are stretched and rotated.
3. Rendering projects the 3D Gaussians onto the 2D screen (as "splats"), sorts them by depth so transparency composites correctly, and then rasterizes them on the fly with a custom tile-based GPU rasterizer.
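The projection step above can be sketched in a few lines of numpy. This is an illustrative toy based on the EWA splatting maths the technique builds on, not the reference CUDA implementation; the function names and the simple pinhole camera here are my own assumptions:

```python
import numpy as np

def covariance_3d(scale, rot):
    """Sigma = R S S^T R^T: build a 3D covariance from per-axis scales
    and a 3x3 rotation matrix (the ellipsoid's stretch and orientation)."""
    S = np.diag(scale)
    return rot @ S @ S.T @ rot.T

def project_gaussian(mean, cov3d, view, focal):
    """Project one 3D Gaussian to a 2D screen-space 'splat'
    (2D centre plus 2x2 screen-space covariance)."""
    # Transform the mean into camera space.
    t = view[:3, :3] @ mean + view[:3, 3]
    # Jacobian of the perspective projection at t (local affine approximation).
    J = np.array([
        [focal / t[2], 0.0, -focal * t[0] / t[2] ** 2],
        [0.0, focal / t[2], -focal * t[1] / t[2] ** 2],
    ])
    W = view[:3, :3]
    cov2d = J @ W @ cov3d @ W.T @ J.T  # 2x2 screen-space covariance
    center = focal * t[:2] / t[2]      # perspective-projected splat centre
    return center, cov2d

# A Gaussian 5 units in front of an identity camera projects to a splat
# centred at the image origin, with its covariance scaled by (focal/depth)^2.
cov = covariance_3d(np.array([1.0, 2.0, 3.0]), np.eye(3))
center, cov2d = project_gaussian(np.array([0.0, 0.0, 5.0]), cov, np.eye(4), 100.0)
```

The real renderer does this for millions of Gaussians per frame on the GPU, but the per-Gaussian maths is just this small matrix product.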
Although the Gaussians are fitted with gradient-descent optimization (the machine-learning part), no neural network is evaluated at rendering time, so GPUs can render the scene nice and fast.
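The depth-sort-then-transparency step in the pipeline above amounts to standard front-to-back alpha compositing per pixel. A minimal sketch (again illustrative, not the paper's CUDA kernel):

```python
import numpy as np

def composite(colors, alphas):
    """Blend already depth-sorted splat samples front-to-back for one pixel.

    colors: list of RGB triples, nearest splat first.
    alphas: the corresponding per-splat opacities at this pixel.
    """
    out = np.zeros(3)
    transmittance = 1.0  # fraction of light still passing through
    for c, a in zip(colors, alphas):
        out += transmittance * a * np.asarray(c, dtype=float)
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:  # early termination once nearly opaque
            break
    return out

# A half-opaque red splat over a half-opaque green one.
pixel = composite([[1, 0, 0], [0, 1, 0]], [0.5, 0.5])
```

This is why the sort matters: the blend is order-dependent, so splats must be processed nearest-first for transparency to come out right.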
In terms of use cases, the technique is perhaps comparable to Unreal's Nanite. Both are designed for static scenes. While individual 3D Gaussians can be moved around on the fly, so in principle the scene can be changed, none of the existing animation tools, game engines or art packages know what to do without polygons. Still, this sort of thing could be used to rapidly create VR worlds from nothing but videos shot from different angles, which seems useful.
Ahh, I'm unlikely to publish any articles, sorry! But Aras has a great Unity renderer for non-VR platforms, if you want to play around with an existing implementation. (It runs incredibly well on my Mac and PC.)
https://github.com/aras-p/UnityGaussianSplatting