filmulator-gui VS vello

Compare filmulator-gui and vello and see what their differences are.

filmulator-gui

Filmulator --- Simplified raw editing with the power of film (by CarVac)

vello

An experimental GPU compute-centric 2D renderer. (by linebender)
               filmulator-gui                             vello
Mentions       19                                         31
Stars          659                                        1,945
Growth         -                                          3.7%
Activity       0.0                                        9.4
Last commit    about 2 months ago                         3 days ago
Language       C++                                        Rust
License        GNU General Public License v3.0 or later   Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

filmulator-gui

Posts with mentions or reviews of filmulator-gui. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-02.
  • The Virtual Blender Camera
    1 project | news.ycombinator.com | 16 Aug 2023
    Let's look at this from the perspective of Shannon's information theory. Cinema is a doubly transmissive system. First, the world has things & shapes: it is information. It transmits information about itself via light, which bounces off it and scatters. This travels first through an air/liquid/vacuum medium (distorting in some cases) and then through the lens's optical medium. Then it hits either a shutter (blocking the light) or, if the shutter is open, a frame of film, which is actually a lot of independent little film grains on a transmissive medium. Ok, we have now received the information, and the shutter closes and advances to the next frame, to repeat another reception.

    Film is kind of interesting because the process of getting the information isn't done there. We also have to re-broadcast the film out, but honestly, that part is kind of boring: shine light through the developed film and it attenuates some parts of the light more than others, reproducing the information encoded on developed film quite directly & without loss.

    So far, this has all been modelled pretty well by this project. We have fancy lens optics, reproducing the light-capture system of a camera. What's missing / uncanny-valley so far is that the virtual world is usually a fairly poor facsimile of the real world. The modelling straight up isn't as good. How things animate and move lacks the subtlety of complex motion that real bodies in motion carry. There's a host of small issues around how light interacts with and bounces off subjects that we don't model well in Blender or most systems: subsurface scattering effects aren't as fancy as they could be, the physically based rendering models aren't complex enough, the air itself as a medium isn't well modelled. There's a huge combo of things virtual worlds aren't as good at as the real world, and there are so many behaviors and nuances of things in the real world that virtual worlds usually don't capture as well. This largely defines the uncanny valley.

    But, just to throw a little more fuel on the fire: this project is also missing another step in cinema that I skipped above. I don't think this is where the uncanny valley problem is, but I think it's a pretty sizable difference between film and digital cinema. Film has another transmission process that I didn't describe above!

    So, we've shot our movie. Now what? Well, we develop the film. What is developing? Well, we immerse the film in an activation bath to develop the exposed silver-halide crystals, better known as film grains. There's information trapped in these crystals, they're in a certain state, and we have a chemical process which sends this information out, through a medium. The medium is the chemical developer, which turns the exposure into developed film grain, which is the received information from this system.

    One of the really crazy things to me is that developing film is not at all like reading exposure values off a digital sensor. The process happens over time chemically, and it actively consumes the film developer as it works, which creates little local pockets where there's less developer. The process is non-linear. A heavily exposed scene will consume the developer and reduce further development speed not just for that film grain, but for the area around it.

    Again, this isn't the uncanny valley problem. But it's still something missing from digital cinema, from this effort, that makes it substantially different from film cinema. There's projects like Filmulator https://filmulator.org/ that I love and adore which can simulate chemical development of film from RAW images. I'd love to see Virtual Blender Camera team up with efforts like these, to create a more genuine film-cinema feel, that models more than just the optical capture systems.
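This local-depletion effect is easy to sketch in code. The following is purely an illustrative toy (not Filmulator's actual model; all constants and names here are made up): each pixel's development consumes developer from a local reservoir, and a crude diffusion step lets fresh developer flow in from neighbours, so heavily exposed regions suppress development nearby.

```python
import numpy as np

def develop(exposure, steps=50, rate=0.1, diffusion=0.2):
    """Toy simulation of local developer depletion (illustrative only).

    exposure: 2D array of latent-image exposure values in [0, 1].
    Returns the developed density after `steps` time steps.
    """
    developer = np.ones_like(exposure)   # local developer concentration
    density = np.zeros_like(exposure)    # developed silver density

    for _ in range(steps):
        # Development rate depends on remaining exposure and local developer.
        delta = rate * (exposure - density).clip(min=0) * developer
        density += delta
        developer -= delta               # developing consumes developer
        # Crude diffusion: developer flows in from neighbouring pixels
        # (periodic boundaries, for simplicity).
        neighbours = (np.roll(developer, 1, 0) + np.roll(developer, -1, 0) +
                      np.roll(developer, 1, 1) + np.roll(developer, -1, 1)) / 4
        developer = ((1 - diffusion) * developer
                     + diffusion * neighbours).clip(min=0)
    return density

# A bright patch next to a dim one: the bright area depletes its developer,
# so its final density grows less than linearly with exposure.
img = np.zeros((16, 16))
img[:, :8] = 0.9   # bright half
img[:, 8:] = 0.2   # dim half
out = develop(img)
```

Even this toy version reproduces the qualitative behavior described above: the bright half ends up under-developed relative to its exposure, which is exactly the natural local-contrast compression film gives you for free.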

  • Make Your Renders Unnecessarily Complicated by Modeling a Film Camera in Blender [video]
    3 projects | news.ycombinator.com | 2 Jul 2023
    I'd also (re-)add: film is just one part of a transmission process.

    Film has to be developed into something. And that's a chemical process, which is non-linear. Developer, the bath you put film in to activate the still-blank but exposed reel and turn the grains into an actual "developed" photo, is a complex analog process. Developer is expended while developing film & becomes less effective at developing, creating a much stronger local contrast across pictures in a natural chemical way.

    There's a pretty complex Shannon information theory system going on here, which I'm not certain how to model. There's maybe an information->transmit->medium->receive->information model between the scene and the film. Then an entirely separate information->transmit->medium->receive->information model between the undeveloped scene and what actually shows up when you "develop" the film.

    As you say, there are quite a variety of film types with different behaviors. https://github.com/t3mujin/t3mujinpack is a set of Darktable presets to emulate various types of film. But the behavior of the film is still only half of the process. As I said in my previous post, developing the film is a complex chemical process, with lots of local effects for different parts of the image. There's enormous power here. https://filmulator.org/ is an epic project that, in my view, is incredibly applicable to almost all modern digital photography and could help us move beyond raw data & appreciate scenes more naturally. It's not "correct", but my personal view is that the aesthetic is much better, and it somewhat represents what the human eye does anyway, with its incredible ability to comprehend & view dynamic range.

  • Show HN: Filmbox, physically accurate film emulation, now on Linux and Windows
    2 projects | news.ycombinator.com | 8 Feb 2023
    How does this compare to my Filmulator, which basically runs a simulation of stand development?

    https://filmulator.org

    (I've been too busy on another project to dedicate too much time to it the past year, and dealing with Windows CI sucks the fun out of everything, so it hasn't been updated in a while…)

  • Film Photography is Still a Great Option.
    1 project | /r/photography | 17 Sep 2022
    She's Got The Look! Many people spend so much time trying to make their digital photos look like film (and massive props to /u/CarVac for his development of Filmulator because it's awesome), but with film that's effortless and automatic. Want to make your photos look like they were shot on Ektar? Use Ektar. Portra? Use Portra. And Velvia, and Provia and Cinestill, and so on.
  • Darktable 4.0.0 Released
    2 projects | news.ycombinator.com | 2 Jul 2022
    > I don't want to do elaborate stuff like working with masks / applying filters to sections of the photo only. Only thing I usually do is increase saturation, and, rarely, brightness/aperture.

    I don't think you're the intended audience for darktable. Try https://filmulator.org/

  • Ask HN: Is there a chemical darkroom emulator for Linux
    1 project | news.ycombinator.com | 26 Apr 2022
  • [HUB] Can Ryzen 6000 Beat Intel Alder Lake? - AMD Ryzen 9 6900HS Review
    1 project | /r/hardware | 24 Feb 2022
  • What is the best non-subscription photo editor?
    2 projects | /r/photography | 18 Jan 2022
    There's a list in the FAQ. I try to stick to free and open-source software. Darktable, RawTherapee, and Filmulator have varying levels of complexity.
  • How impactful is free and open source software development?
    4 projects | /r/slatestarcodex | 14 Oct 2021
  • Looking for good editing software
    1 project | /r/EditMyRaw | 28 Jun 2021
    Shameless self-plug: https://filmulator.org/

vello

Posts with mentions or reviews of vello. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-20.
  • Rive Renderer – now open source and available on all platforms
    7 projects | news.ycombinator.com | 20 Mar 2024
    I'm looking forward to doing careful benchmarking, as this renderer absolutely looks like it will be competitive. It turns out that's really hard to do if you want meaningful results.

    My initial take is that performance will be pretty dependent on hardware, in particular support for pixel local storage[1]. From what I've seen so far, Apple Silicon is the sweet spot, as there is hardware support for binning and sorting to tiles, and then asking for fragment shader execution to be serialized within a tile works well. On other hardware, I expect the cost of serializing those invocations to be much higher.

    One reason we haven't done deep benchmarking on the Vello side is that our performance story is far from done. We know one current issue is the use of device atomics for aggregating bounding boxes. We have a prototype implementation [2] that uses monoids for segmented reduction. Additionally, we plan to do f16 math (which should be a major win especially on mobile), as well as using subgroups for various prefix sum steps (subgroups are in the process of landing in WebGPU[3]).

    Overall, I'm thrilled to see this released as open source, and that there's so much activity in fast GPU vector graphics rendering. I'd love to see a future in which CPU path rendering is seen as being behind the times, and this moves us closer to that future.

    [1]: https://dawn.googlesource.com/dawn/+/refs/heads/main/docs/da...

    [2]: https://github.com/linebender/vello/issues/259

    [3]: https://github.com/gpuweb/gpuweb/issues/4306
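    The prefix sums mentioned above are a core building block in this kind of GPU pipeline. As an illustration only (a sequential CPU reference, not Vello's actual shader code), the work-efficient scan structure that GPU implementations parallelize looks like this:

```python
def blelloch_exclusive_scan(xs):
    """Work-efficient exclusive prefix sum (Blelloch scan), CPU reference.

    GPU compute shaders parallelize the same up-sweep / down-sweep tree;
    this sequential version just shows the access pattern.
    """
    n = len(xs)
    assert n and (n & (n - 1)) == 0, "power-of-two length for simplicity"
    a = list(xs)

    # Up-sweep (reduce): build partial sums up a binary tree.
    step = 1
    while step < n:
        for i in range(2 * step - 1, n, 2 * step):
            a[i] += a[i - step]
        step *= 2

    # Down-sweep: clear the root, then push prefixes back down the tree.
    a[n - 1] = 0
    step = n // 2
    while step >= 1:
        for i in range(2 * step - 1, n, 2 * step):
            a[i - step], a[i] = a[i], a[i] + a[i - step]
        step //= 2
    return a

print(blelloch_exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))
# -> [0, 3, 4, 11, 11, 15, 16, 22]
```

    The same structure generalizes from `+` on integers to any monoid (e.g. bounding-box union), which is what the segmented-reduction work referenced in [2] builds on.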

  • WebKit Switching to Skia for 2D Graphics Rendering
    6 projects | news.ycombinator.com | 20 Feb 2024
  • Looking for this. html + css rendering through wgpu.
    14 projects | /r/rust | 3 Jul 2023
    Dioxus is working on this with blitz. It's leveraging wgpu through the linebender group's Vello renderer. Still in early stages.
  • A note on Metal shader converter
    2 projects | news.ycombinator.com | 12 Jun 2023
    If you're doing advanced compute work (including lock-free data structures), then it's best effort.

    https://github.com/linebender/vello/issues/42 is an issue from when Vello (then piet-gpu) had a single-pass prefix sum algorithm. Looking back, I'm fairly confident that it's a shader translation issue and that it wouldn't work with MoltenVK either, but we stopped investigating when we moved to a more robustly portable approach.

  • Vello: An experimental WebGPU-based compute-centric 2D renderer in Rust
    1 project | news.ycombinator.com | 23 Apr 2023
  • XUL Layout has been removed from Firefox
    18 projects | news.ycombinator.com | 1 Apr 2023
    There are a number of up-and-coming Rust-based frameworks in this niche:

    - https://github.com/iced-rs/iced (probably the most usable today)

    - https://github.com/vizia/vizia

    - https://github.com/marc2332/freya

    - https://github.com/linebender/xilem (currently very incomplete but exciting because it's from a team with a strong track record)

    What is also exciting to me is that the Rust GUI ecosystem is in many cases building itself up with modular libraries. So while we have umpteen competing frameworks they are to a large degree all building and collaborating on the same foundations. For example, we have:

    - https://github.com/rust-windowing/winit (cross-platform window creation)

    - https://github.com/gfx-rs/wgpu (abstraction on top of vulkan/metal/dx12)

    - https://github.com/linebender/vello (a canvas like imperative drawing API on top of wgpu)

    - https://github.com/DioxusLabs/taffy (UI layout algorithms)

    - https://github.com/pop-os/cosmic-text (text rendering and editing)

    - https://github.com/AccessKit/accesskit (cross-platform accessibility APIs)

    See https://blessed.rs/crates#section-graphics-subsection-gui for a more complete list of frameworks and foundational libraries.

  • Drawing and Annotation in Rust
    1 project | /r/rust | 8 Mar 2023
    blessed.rs lists these three crates for 2D drawing:
    - https://lib.rs/crates/femtovg
    - https://lib.rs/crates/skia-safe
    - https://github.com/linebender/vello
  • Recommended UI framework to draw many 2D lines?
    5 projects | /r/rust | 6 Mar 2023
    Vello (https://github.com/linebender/vello), which uses wgpu to render. Edit: just saw you require images. Vello doesn't support those yet
  • Announcing piet-glow, a GL-based implementation of Piet for 2D rendering
    3 projects | /r/rust | 6 Mar 2023
    How does this relate to Vello? Both target raw-window-handle for winit compatibility. Vello uses WGPU vs piet-glow using GL.
  • Is WGPU actually a good idea yet?
    1 project | /r/rust_gamedev | 1 Mar 2023
    Finally, maybe vello could help you with ideas. It's not production ready yet, but they have some interesting ideas for 2D rendering using wgpu.

What are some alternatives?

When comparing filmulator-gui and vello you can also consider the following projects:

sosumi-snap

nanovg - Antialiased 2D vector drawing library on top of OpenGL for UI and visualizations.

photostructure-for-servers - PhotoStructure is your new home for all your photos and videos. Installation should only take a couple minutes.

msdfgen - Multi-channel signed distance field generator

RawTherapee - A powerful cross-platform raw photo processing program

Vrmac - Vrmac Graphics, a cross-platform graphics library for .NET. Supports 3D, 2D, and accelerated video playback. Works on Windows 10 and Raspberry Pi4.

darktable - darktable is an open source photography workflow application and raw developer

troika - A JavaScript framework for interactive 3D and 2D visualizations

wallpapers - Wallpapers for Pop!_OS

tinyraytracer - A brief computer graphics / rendering course

dnglab - Camera RAW to DNG file format converter

gpuweb - Where the GPU for the Web work happens!