Top 23 Python Rendering Projects
Photogrammetry-Guide
Photogrammetry Guide. Photogrammetry is widely used for aerial surveying, agriculture, architecture, 3D games, robotics, archaeology, construction, emergency management, and medicine.
Project mention: Interstellar movie is implemented with Einstein's equations in 40k lines C++ | news.ycombinator.com | 2024-08-14
stretches fingers
So, the "farm" (the name given to all the machines that render everything) had 36k CPUs. I can't remember the exact specs of the machines, but I think they were either 8- or 10-core CPUs. Most of them were blade units, because that was the densest way to fit that many CPUs into that much space. (The farm lived in the basement and consumed something like half a megawatt; I can't remember if that included aircon or not.)
Now, each machine on the farm was split into slots. From memory, the biggest slot was 8 cores, but you could request less.
The farm ran "jobs", which were lots of commands strung together into a "directed acyclic graph" (DAG). A node in the job could be as simple as "cd /show && mkdir dave/" or it could be a render. Each stage in the job could have a dependency: on a physical property, like the amount of RAM; on a machine class (some CPUs were newer than others); or on a license to run RenderMan (a renderer from Pixar) or some other expensive bit of software. It could also be dependent on a previous stage completing (so frame 44 can't render before frame 20, because it needs to reference something that frame 20 generates.)
All these commands are parcelled up into a single lump, using a "job description language" and sent to the scheduler.
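The dispatch order for such a DAG can be sketched with Python's standard-library graphlib. The node names below are invented for illustration; this is not the studio's actual job description language:

```python
from graphlib import TopologicalSorter

# Hypothetical job: each node maps to the set of nodes it depends on.
# Frame 44 can't start before frame 20 because it reads frame 20's output.
job = {
    "mkdir_show": set(),                         # e.g. "cd /show && mkdir dave/"
    "render_f20": {"mkdir_show"},
    "render_f44": {"mkdir_show", "render_f20"},  # cross-frame dependency
    "comp_f44":   {"render_f44"},
}

# The scheduler may run nodes in any order consistent with the DAG;
# static_order() yields one such linearisation.
order = list(TopologicalSorter(job).static_order())
print(order)
```

A real scheduler also attaches the per-node constraints described above (RAM, machine class, licenses), but the dependency handling reduces to exactly this kind of topological ordering.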
It's the scheduler that works out where and when to place a command, and on which machine. Now, the system they used at the time was called Alfred. The thing you need to know about Alfred is that its interface was written in something that looks like the Athena widget set: http://appartager.free.fr/renderman/prman%2012.5/programming...
Alfred is old. As in single-threaded, older-than-SSH old. The man page dates from 1995, and I suspect it's probably older still by a good 5 years.
However, despite being old, it's still fast. It can dispatch jobs way quicker than k8s, even on an old shitty machine. But we were pushing it a bit: I think we were sending something like 30k commands an hour through the thing (i.e. telling a machine to run a command, store the logs, capture the return code, pre-empt, reap, all that kind of jazz). We did have to run it on an overclocked workstation, as the main VM cluster wasn't quite fast enough in single-threaded performance to keep up with demand.
We had something like 800 artists in the building, all using the quaint Athena interface.
There was a cgroups wrapper written to make sure that people couldn't take more RAM than they were allotted. We oversubscribed CPU by something like 10-20%. If you went over your RAM allocation, you'd get OOM'd. Swapping RAM between processes is expensive; swapping CPU is pretty much free (it's not, but the penalty for running at 110% CPU is way less than the electricity bill for adding more machines and undersubscribing.)
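A minimal sketch of that wrapper idea, using setrlimit as a stand-in for cgroups (the comment says the real wrapper used cgroups; the function name and the limits below are invented, and this is POSIX-only):

```python
import resource
import subprocess
import sys

def run_capped(cmd, max_bytes):
    """Run cmd with a hard cap on address space, so a job that overshoots
    its slot's RAM allocation fails itself instead of starving neighbours."""
    def cap():
        resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))
    return subprocess.run(cmd, preexec_fn=cap).returncode

# A 2 GiB allocation under a 1 GiB cap dies in the child ("OOM'd");
# a well-behaved command under the same cap runs fine.
greedy = run_capped([sys.executable, "-c", "x = bytearray(1 << 31)"], 1 << 30)
polite = run_capped([sys.executable, "-c", "pass"], 1 << 30)
print(greedy, polite)
```

cgroups gives the same effect fleet-wide (and meters actual resident memory rather than address space), but the slot semantics are the same: the overcommitting process dies, the machine survives.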
So why not K8s?
I'm sure some people do use it. But it's not practical for batch processing like this, for a number of reasons:
1) scaling past 500 nodes means you lose a lot of the network to message passing and state transfer
2) the scheduler isn't designed for complex dependency trees (by default you can have a sidecar, and that's about it, really. You can create a service, but that's not really designed for ephemeral tasks)
3) the networking is batshit (virtual networking is really not great for low-latency, high-throughput stuff like NFS or some other file protocol)
What can you use?
If you're on AWS, Batch is good enough. It's not as fast, but it'll do. You'll need to write an interface to build complex job graphs, though.
Azure has a batch interface as well.
https://www.opencue.io/ is what a lot of people use. And some people use https://renderman.pixar.com/tractor
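On AWS Batch, those cross-stage dependencies map onto the dependsOn parameter of boto3's submit_job; the "interface" mentioned above is essentially code that walks your DAG and wires job IDs together. A rough sketch (the queue and definition names are made up, and the actual AWS call is shown only as a usage comment since it needs real credentials):

```python
def depends_on(parent_ids):
    """Build the dependsOn structure AWS Batch's submit_job expects:
    one {"jobId": ...} entry per upstream job."""
    return [{"jobId": pid} for pid in parent_ids]

def submit_stage(batch, name, queue, definition, parent_ids):
    """Submit one DAG node; Batch holds it until every parent succeeds."""
    resp = batch.submit_job(
        jobName=name,
        jobQueue=queue,
        jobDefinition=definition,
        dependsOn=depends_on(parent_ids),
    )
    return resp["jobId"]

# Usage sketch (needs boto3, AWS credentials, and a real queue/definition):
#   import boto3
#   batch = boto3.client("batch")
#   f20 = submit_stage(batch, "render-f20", "render-queue", "prman-def", [])
#   f44 = submit_stage(batch, "render-f44", "render-queue", "prman-def", [f20])
print(depends_on(["job-1", "job-2"]))
```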
RadeonProRenderBlenderAddon
This hardware-agnostic rendering plug-in for Blender uses accurate ray-tracing technology to produce images and animations of your scenes, and provides real-time interactive rendering and continuous adjustment of effects.
Python-Raytracer
A basic ray tracer that uses NumPy arrays and vectorized functions to run reasonably fast.
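To give a flavour of that approach (a hand-rolled sketch, not code from the repo): intersecting one sphere against a whole batch of rays becomes a few array expressions instead of a per-ray Python loop.

```python
import numpy as np

def sphere_hits(origins, dirs, center, radius):
    """Per-ray hit distance to a sphere; np.inf where the ray misses.
    dirs are assumed unit length, so the quadratic's 'a' term is 1."""
    oc = origins - center                       # (N, 3) ray origin - center
    b = 2.0 * np.einsum("ij,ij->i", dirs, oc)   # per-ray 2 * dot(D, O - C)
    c = np.einsum("ij,ij->i", oc, oc) - radius * radius
    disc = b * b - 4.0 * c                      # discriminant of |O + tD - C|^2 = r^2
    t = (-b - np.sqrt(np.maximum(disc, 0.0))) / 2.0
    return np.where((disc > 0) & (t > 0), t, np.inf)

# Two rays from the origin: one straight at a unit sphere 5 units away, one off to the side.
origins = np.zeros((2, 3))
dirs = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
t = sphere_hits(origins, dirs, np.array([0.0, 0.0, 5.0]), 1.0)
print(t)  # first ray hits at t = 4, second ray misses
```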
Project mention: Gigi: Rapid prototyping and development of real-time rendering techniques | news.ycombinator.com | 2024-09-06
BlenderUSDHydraAddon
This add-on allows you to assemble and compose USD data with Blender data and render it all using various renderers via Hydra.
2dimageto3dmodel
We evaluate our method on different datasets (including ShapeNet, CUB-200-2011, and Pascal3D+) and achieve state-of-the-art results, outperforming all the other supervised and unsupervised methods and 3D representations, all in terms of performance, accuracy, and training time.
https://github.com/360macky/generative-manim :
> Generative Manim is a prototype of a web app that uses GPT-4 to generate videos with Manim. The idea behind this project is taking advantage of the power of GPT-4 in programming, the understanding of human language and the animation capabilities of Manim to generate a tool that could be used by anyone to create videos. Regardless of their programming or video editing skills.
"TheoremQA: A Theorem-driven [STEM] Question Answering dataset" (2023) https://github.com/wenhuchen/TheoremQA#leaderboard
How do you score memory retention and video-watching comprehension? That's the classic educators' optimization challenge.
"Khan Academy’s 7-Step Approach to Prompt Engineering for Khanmigo"
Python Rendering related posts
- Gigi: Rapid prototyping and development of real-time rendering techniques
- Summing Blue Noise Octaves Like Perlin Noise
- Framework for rapid prototyping and development of realtime rendering techniques
- Blender Game Engine's
- Defold: Open-source Lua game engine with console support
- Why should I prefer zengl over moderngl?
Index
What are some of the best open-source Rendering projects in Python? This list will help you:
| # | Project | Stars |
|---|---|---|
| 1 | armory | 3,050 |
| 2 | BlenderProc | 2,727 |
| 3 | nerfacc | 1,381 |
| 4 | pyrender | 1,286 |
| 5 | glumpy | 1,233 |
| 6 | Photogrammetry-Guide | 1,123 |
| 7 | OpenCue | 824 |
| 8 | TouchDesigner_Shared | 771 |
| 9 | objmc | 509 |
| 10 | RadeonProRenderBlenderAddon | 482 |
| 11 | Python-Raytracer | 465 |
| 12 | gigi | 437 |
| 13 | rd-blender-docker | 426 |
| 14 | BlenderUSDHydraAddon | 362 |
| 15 | taichi-ngp-renderer | 362 |
| 16 | 2dimageto3dmodel | 269 |
| 17 | generative-manim | 263 |
| 18 | neural-deferred-shading | 251 |
| 19 | pymadcad | 210 |
| 20 | zengl | 172 |
| 21 | ai_upscaler_for_blender | 58 |
| 22 | blender-renderborder | 32 |
| 23 | skia-animations | 1 |