filmulator-gui
darktable
| | filmulator-gui | darktable |
|---|---|---|
| Mentions | 19 | 389 |
| Stars | 659 | 8,792 |
| Star growth (monthly) | - | 1.7% |
| Activity | 0.0 | 10.0 |
| Last commit | about 1 month ago | 4 days ago |
| Language | C++ | C |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
filmulator-gui
-
The Virtual Blender Camera
Let's look at this from the perspective of Shannon's information theory. Cinema is a doubly transmissive system. First, the world has things & shapes: it is information. It transmits information about itself via light, which bounces off it and scatters. That light travels first through an air/liquid/vacuum medium (distorting it in some cases) and then through the lens's optical medium. Then it hits either a shutter (blocking the light) or, if the shutter is open, a frame of film, which is actually a lot of independent little film grains on a transmissive medium. Ok, we have now received the information; the shutter closes and the film advances to the next frame, to repeat another reception.
Film is kind of interesting because the process of transmitting the information isn't done there. We also have to re-broadcast the film, but honestly, that part is kind of boring: shine light through the developed film and it attenuates some parts of the light more than others, reproducing the information encoded on the developed film quite directly & without loss.
So far, this has all been modelled pretty well by this project. We have fancy lens optics, reproducing the light-capture system of a camera. What's missing / uncanny-valley so far is that the virtual world is usually a fairly poor facsimile of the real world. The modelling straight up isn't as good. How things animate and move lacks the subtlety of complex motion that real bodies in motion carry. There's a host of small issues around how light interacts with and bounces off subjects that we don't model well in Blender or most systems: subsurface scattering effects aren't as fancy as they could be, the physically based rendering models aren't complex enough, the air itself as a medium isn't well modelled. There's a huge combination of things the virtual worlds aren't as good at as the real world, and there are so many behaviors and nuances of things in the real world that virtual worlds usually don't capture as well. This largely defines the uncanny valley.
But, just to throw a little more fuel on the fire: this project is also missing another step in cinema that I skipped above. I don't think this is where the uncanny valley problem is, but I think it's a pretty sizable difference between film and digital cinema. Film has another transmission process that I didn't describe above!
So, we've shot our movie. Now what? Well, we develop the film. What is developing? We immerse the film in an activation bath to develop the exposed silver-halide crystals, better known as film grains. There's information trapped in these crystals, they're in a certain state, and we have a chemical process which sends this information out, through a medium. The medium is the chemical developer, which turns the exposure into developed film grain, which is the received information from this system.
One of the really crazy things to me is that developing film is not at all like reading exposure values off a digital sensor. The process happens over time, chemically, and it actively consumes the film developer as it works, creating little local pockets where there's less developer. The process is non-linear: a heavily exposed area will consume the developer and slow further development, not just for that film grain but for the area around it.
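That local-depletion idea can be sketched numerically. This is just a toy model of my own, not Filmulator's actual algorithm: each pixel develops in proportion to its exposure and the local developer concentration, development consumes developer, and a crude diffusion step lets neighboring pixels replenish (or drain) each other, so bright regions suppress development nearby.

```python
import numpy as np

def develop(exposure, steps=50, rate=0.1, diffusion=0.2):
    """Toy developer-depletion simulation (hypothetical, not Filmulator's model).

    exposure: 2D array of latent-image intensity in [0, 1].
    Returns the developed density grid in [0, 1].
    """
    developer = np.ones_like(exposure, dtype=float)  # uniform bath at the start
    density = np.zeros_like(exposure, dtype=float)
    for _ in range(steps):
        # development rate ~ latent image * local developer * remaining headroom
        d = rate * exposure * developer * (1.0 - density)
        density += d
        developer = np.clip(developer - d, 0.0, 1.0)  # developer is consumed locally
        # crude diffusion: developer flows toward depleted neighbors (wrapping edges)
        neighbor_avg = (np.roll(developer, 1, 0) + np.roll(developer, -1, 0)
                        + np.roll(developer, 1, 1) + np.roll(developer, -1, 1)) / 4.0
        developer += diffusion * (neighbor_avg - developer)
    return density

# A bright block in a dim field: the dim pixel right next to the block
# develops less than an equally dim pixel far away, because the block
# drains its local developer.
exposure = np.full((10, 10), 0.2)
exposure[4:6, 4:6] = 1.0
density = develop(exposure)
```

The interesting output is the ordering: `density[3, 4]` (dim, adjacent to the bright block) ends up below `density[0, 0]` (dim, far away), which is exactly the local-contrast edge effect the chemistry produces for free.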
Again, this isn't the uncanny valley problem. But it's still something missing from digital cinema, and from this effort, that makes it substantially different from film cinema. There are projects like Filmulator (https://filmulator.org/), which I love and adore, that can simulate the chemical development of film from RAW images. I'd love to see the Virtual Blender Camera team up with efforts like these, to create a more genuine film-cinema feel that models more than just the optical capture system.
-
Make Your Renders Unnecessarily Complicated by Modeling a Film Camera in Blender [video]
I'd also (re-)add: film is just one part of a transmission process.
Film has to be developed into something, and that's a chemical process, which is non-linear. The developer, the bath you put film in to activate the still-blank but exposed reel and turn the grains into an actual "developed" photo, drives a complex analog process. Developer is expended while developing film & becomes less effective at developing, creating a much stronger local contrast across pictures in a natural chemical way.
There's a pretty complex Shannon information theory system going on here, which I'm not certain how to model. There's maybe an information->transmit->medium->receive->information model between the scene and the film. Then an entirely separate information->transmit->medium->receive->information model between the undeveloped film and what actually shows up when you "develop" it.
As you say, there are quite a variety of film types with different behaviors. https://github.com/t3mujin/t3mujinpack is a set of Darktable presets that emulate various types of film. But the behavior of the film is still only half of the process. As I said in my previous post, developing the film is a complex chemical process, with lots of local effects for different parts of the image. There's enormous power here. https://filmulator.org/ is an epic project that, in my view, is applicable to almost all modern digital photography: it could help us move beyond raw data & appreciate scenes more naturally. It's not "correct," but my personal view is that the aesthetic is much better, and it somewhat represents what the human eye does anyway, with its incredible ability to comprehend & view dynamic range.
-
Show HN: Filmbox, physically accurate film emulation, now on Linux and Windows
How does this compare to my Filmulator, which basically runs a simulation of stand development?
https://filmulator.org
(I've been too busy on another project to dedicate too much time to it the past year, and dealing with Windows CI sucks the fun out of everything, so it hasn't been updated in a while…)
-
Film Photography is Still a Great Option.
She's Got The Look! Many people spend so much time trying to make their digital photos look like film (and massive props to /u/CarVac for his development of Filmulator because it's awesome), but with film that's effortless and automatic. Want to make your photos look like they were shot on Ektar? Use Ektar. Portra? Use Portra. And Velvia, and Provia and Cinestill, and so on.
-
Darktable 4.0.0 Released
> I don't want to do elaborate stuff like working with masks / applying filters to sections of the photo only. Only thing I usually do is increase saturation, and, rarely, brightness/aperture.
I don't think you're the intended audience for darktable. Try https://filmulator.org/
- Ask HN: Is there a chemical darkroom emulator for Linux
- [HUB] Can Ryzen 6000 Beat Intel Alder Lake? - AMD Ryzen 9 6900HS Review
-
What is the best non-subscription photo editor?
There's a list in the FAQ. I try to stick to free and open-source software. Darktable, RawTherapee, and Filmulator have varying levels of complexity.
- How impactful is free and open source software development?
-
Looking for good editing software
Shameless self-plug: https://filmulator.org/
darktable
- Darktable: Open-source photography workflow application and RAW developer
-
Vienna with the GR III and IIIx
There's also darktable, which is open source but rather… involved. It's incredibly powerful but has a steep learning curve. https://www.darktable.org/
- Software Advice Needed
-
Darktable: Crashing into the Wall in Slow-Motion
> the while loop of death (source: https://github.com/darktable-org/darktable/blob/darktable-4....)
shudder
Yeah, I too wouldn't want to volunteer to contribute to a project which is OK with this.
-
RAW image editor for Mac
https://www.darktable.org/ or https://www.rawtherapee.com/ both free, open source and cross platform
-
Ansel
Author has a blog post here, https://ansel.photos/en/news/darktable-dans-le-mur-au-ralent..., which exhibits some example code, for example, this: https://github.com/darktable-org/darktable/blob/darktable-4....
That's a far cry from what I'd find acceptable in any project.
- Retroactive: Run Aperture, iPhoto and iTunes on macOS Ventura, Monterey, Big Sur
- Ask HN: What are some self-hosted photo organizing/sharing programs?
-
A collection of useful Mac Apps
Darktable - Price: Free. Free and open-source photo editing software for Mac that features advanced editing tools and a user-friendly interface.
-
Analysis paralysis - need advice
The base M2 is plenty cheap and packs a lot of punch; it flies through my RAW edits in darktable (https://www.darktable.org/) even with just 8GB of RAM. (Granted, I don't have much else running when using darktable.)
What are some alternatives?
sosumi-snap
RawTherapee - A powerful cross-platform raw photo processing program
photostructure-for-servers - PhotoStructure is your new home for all your photos and videos. Installation should only take a couple minutes.
ansel - A darktable fork minus the bloat plus some design vision.
davinci-resolve-linux - Set up Davinci Resolve on Linux and fix issues with importing and exporting media
vello - An experimental GPU compute-centric 2D renderer.
czkawka - Multi functional app to find duplicates, empty folders, similar images etc.
wallpapers - Wallpapers for Pop!_OS
exiftool - ExifTool meta information reader/writer
dnglab - Camera RAW to DNG file format converter
avif - THIS PROJECT HAS MOVED: https://github.com/AOMediaCodec/libavif