filmulator-gui VS libjxl

Compare filmulator-gui vs libjxl and see what are their differences.

                 filmulator-gui                             libjxl
Mentions         19                                         85
Stars            659                                        2,236
Growth           -                                          29.1%
Activity         0.0                                        9.8
Last commit      about 2 months ago                         2 days ago
Language         C++                                        C++
License          GNU General Public License v3.0 or later   BSD 3-clause "New" or "Revised" License
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

filmulator-gui

Posts with mentions or reviews of filmulator-gui. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-02.
  • The Virtual Blender Camera
    1 project | news.ycombinator.com | 16 Aug 2023
    Let's look at this from the perspective of Shannon's Information Theory. Cinema is a double transmissive system. First, the world has things & shapes: it is information. It transmits information about itself via light, which bounces off it and scatters. This travels first through an air/liquid/vacuum medium (distorting in some cases) and then through the lens's optical medium. Then it impacts either a shutter (blocking the light) or, if the shutter is open, a frame of film, which is actually a lot of independent little film grains on a transmissive medium. Ok, we have now received the information, and the shutter closes and advances to the next frame, to repeat another reception.

    Film is kind of interesting because the process of getting the information isn't done there. We also have to re-broadcast the film out, but honestly, that part is kind of boring: shine light through the developed film and it attenuates some parts of the light more than others, reproducing the information encoded on developed film quite directly & without loss.

    So far, this has all been modelled pretty well by this project. We have fancy lens optics, reproducing the light-capture system of a camera. What's missing / uncanny-valley so far is that the virtual world is usually a fairly poor facsimile of the real world. The modelling straight up isn't as good. How things animate and move lacks the subtlety of complex motion that real bodies in motion carry. There's a host of small issues around how light interacts with / bounces off subjects that we don't model well in Blender or most systems: subsurface scattering effects aren't as fancy as they could be, the physically based rendering models aren't complex enough, the air itself as a medium isn't well modelled. There's a huge combo of things the virtual worlds aren't as good at as the real world, and there are so many behaviors and nuances of things in the real world that virtual worlds usually don't capture as well. This largely defines the uncanny valley.

    But, just to throw a little more fuel on the fire: this project is also missing another step in cinema that I skipped above. I don't think this is where the uncanny valley problem is, but I think it's a pretty sizable difference between film and digital cinema. Film has another transmission process that I didn't describe above!

    So, we've shot our movie. Now what? Well, we develop the film. What is developing? Well, we immerse the film in an activation bath to develop the exposed silver-halide crystals, better known as film grains. There's information trapped in these crystals, they're at a certain state, and we have a chemical process which sends this information out, through a medium. The medium is the chemical developer, which turns the exposure into developed film grain, which is the received information from this system.

    One of the really crazy things to me is that developing film is not at all like reading exposure values off a digital sensor. The process happens over time, chemically, and it actively consumes the film developer as it works, which creates little local pockets where there's less developer. The process is non-linear. A heavily exposed scene will consume the developer and reduce further development speed not just for that film grain, but for the area around it (a toy sketch of this feedback loop follows this post).

    Again, this isn't the uncanny valley problem. But it's still something missing from digital cinema, from this effort, that makes it substantially different from film cinema. There are projects like Filmulator https://filmulator.org/ that I love and adore, which can simulate chemical development of film from RAW images. I'd love to see Virtual Blender Camera team up with efforts like these, to create a more genuine film-cinema feel that models more than just the optical capture systems.
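
Below is a minimal, hypothetical sketch of the depletion feedback described above (a toy model for illustration, not Filmulator's actual algorithm): each cell develops at a rate proportional to its remaining latent exposure and to the local developer concentration, and developer diffuses between neighbouring cells, so a heavy highlight starves the cells around it.

```cpp
// Toy model of developer depletion (illustrative only; not Filmulator's code).
// Each cell develops at a rate proportional to its remaining exposed silver
// and to the local developer concentration; developer diffuses between cells.
#include <cstdio>
#include <vector>

int main() {
    const int w = 8, h = 8, steps = 200;
    const float rate = 0.05f, diffusion = 0.1f;

    std::vector<float> exposure(w * h, 0.2f);   // latent image (arbitrary units)
    std::vector<float> density(w * h, 0.0f);    // developed silver so far
    std::vector<float> developer(w * h, 1.0f);  // local developer concentration
    exposure[3 * w + 3] = 5.0f;                 // one heavily exposed "highlight"

    for (int t = 0; t < steps; ++t) {
        // Development consumes both latent exposure and developer.
        for (int i = 0; i < w * h; ++i) {
            float d = rate * exposure[i] * developer[i];
            exposure[i] -= d;
            density[i] += d;
            developer[i] -= d;
        }
        // Crude 4-neighbour diffusion of developer (replenishment from the bath
        // and proper boundary handling are omitted for brevity).
        std::vector<float> next = developer;
        for (int y = 1; y < h - 1; ++y)
            for (int x = 1; x < w - 1; ++x) {
                int i = y * w + x;
                float lap = developer[i - 1] + developer[i + 1] +
                            developer[i - w] + developer[i + w] - 4 * developer[i];
                next[i] += diffusion * lap;
            }
        developer = next;
    }
    // A cell next to the highlight ends up less developed than an equally
    // exposed cell far away: the local-contrast effect described above.
    std::printf("near highlight: %.3f  far away: %.3f\n",
                density[3 * w + 4], density[6 * w + 6]);
    return 0;
}
```

Run it and the cell next to the highlight comes out with lower developed density than an identically exposed cell far away, which is the natural local-contrast (and stand-development) effect the comment describes.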

  • Make Your Renders Unnecessarily Complicated by Modeling a Film Camera in Blender [video]
    3 projects | news.ycombinator.com | 2 Jul 2023
    I'd also (re-)add: film is just one part of a transmission process.

    Film has to be developed into something. And that's a chemical process, which is non-linear. Developer, the bath you put film in to activate the still blank but exposed reel and turn the grains into an actual "developed" photo, is a complex analog process. "Developer" is expended while developing film & becomes less effective at developing, creating a much stronger local contrast across pictures in a natural chemical way.

    There's a pretty complex Shannon Information Theory system going on here, which I'm not certain how to model. There's maybe an information->transmit->medium->receive->information model between the scene and the film. Then an entirely separate information->transmit->medium->receive->information model between the undeveloped film and what actually shows up when you "develop" the film.

    As you say, there are quite a variety of film types with different behaviors. https://github.com/t3mujin/t3mujinpack is a set of Darktable presets to emulate various types of film. But the behavior of the film is still only half of the process. As I said in my previous post, developing the film is a complex chemical process, with lots of local effects for different parts of the image. There's enormous power here. https://filmulator.org/ is an epic project that, in my view, is incredibly applicable to almost all modern digital photography and could help us move beyond raw data & appreciate scenes more naturally. It's not "correct", but my personal view is that the aesthetic is much better, and it somewhat represents what the human eye does anyways, with its incredible ability to comprehend & view dynamic range.

  • Show HN: Filmbox, physically accurate film emulation, now on Linux and Windows
    2 projects | news.ycombinator.com | 8 Feb 2023
    How does this compare to my Filmulator, which basically runs a simulation of stand development?

    https://filmulator.org

    (I've been too busy on another project to dedicate too much time to it the past year, and dealing with Windows CI sucks the fun out of everything, so it hasn't been updated in a while…)

  • Film Photography is Still a Great Option.
    1 project | /r/photography | 17 Sep 2022
    She's Got The Look! Many people spend so much time trying to make their digital photos look like film (and massive props to /u/CarVac for his development of Filmulator because it's awesome), but with film that's effortless and automatic. Want to make your photos look like they were shot on Ektar? Use Ektar. Portra? Use Portra. And Velvia, and Provia and Cinestill, and so on.
  • Darktable 4.0.0 Released
    2 projects | news.ycombinator.com | 2 Jul 2022
    > I don't want to do elaborate stuff like working with masks / applying filters to sections of the photo only. Only thing I usually do is increase saturation, and, rarely, brightness/aperture.

    I don't think you're the intended audience for darktable. Try https://filmulator.org/

  • Ask HN: Is there a chemical darkroom emulator for Linux
    1 project | news.ycombinator.com | 26 Apr 2022
  • [HUB] Can Ryzen 6000 Beat Intel Alder Lake? - AMD Ryzen 9 6900HS Review
    1 project | /r/hardware | 24 Feb 2022
  • What is the best non-subscription photo editor?
    2 projects | /r/photography | 18 Jan 2022
    There's a list in the FAQ. I try to stick to free and open-source software. Darktable, RawTherapee, and Filmulator have varying levels of complexity.
  • How impactful is free and open source software development?
    4 projects | /r/slatestarcodex | 14 Oct 2021
  • Looking for good editing software
    1 project | /r/EditMyRaw | 28 Jun 2021
    Shameless self-plug: https://filmulator.org/

libjxl

Posts with mentions or reviews of libjxl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-02.
  • JPEG XL and Google's War Against It
    2 projects | news.ycombinator.com | 2 May 2024
    > Regarding JPEG XL's mobile support, it makes sense it would see limited development if the company that manages one of the biggest mobile players has been the greatest restriction on their success. The lack of support also disincentivises manufacturers to prioritise support.

    There was literally no involvement from any hardware vendor in the standardization of JPEG XL. It went from a Call for Proposals in Sept 2018 to Committee Draft in Aug 2019 with very little time for industry feedback. Contrast this with AV1 which had involvement from hardware vendors Intel, NVIDIA, Arm, AMD, Broadcom, Amlogic from the beginning as well as companies who ship media on hardware at scale such as Cisco, Netflix, Samsung and yes Google. These companies reviewed and provided significant feedback on the format that made it suitable for hardware implementation.

    https://news.ycombinator.com/threads?id=JyrkiAlakuijala is a lead on the project and a Google employee, and active in JPEG XL development https://github.com/libjxl/libjxl/commits?author=jyrkialakuij...

  • JPEG XL Reference Implementation
    1 project | news.ycombinator.com | 4 Apr 2024
  • JPEG XL and the Pareto Front
    9 projects | news.ycombinator.com | 1 Mar 2024
    https://github.com/libjxl/libjxl/blob/main/doc/format_overvi... is a pretty detailed but good overview. The highlights are variable-size DCT (up to 128x128), ANS entropy coding, and chroma-from-luminance prediction. https://github.com/libjxl/libjxl/blob/main/doc/encode_effort... also gives a good breakdown of features by effort level (a minimal encoder sketch that sets the effort level follows this post).
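
As a concrete illustration of the effort knob mentioned above, here is a hedged sketch of encoding one frame through libjxl's C API (function names as they appear in <jxl/encode.h>; error handling and the full output loop are trimmed, and details may differ between library versions):

```cpp
// Minimal libjxl encode sketch: set effort and distance, encode one RGB frame.
// Based on the public C API in <jxl/encode.h>; details may vary by version.
#include <jxl/encode.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const uint32_t xsize = 256, ysize = 256;
    std::vector<uint8_t> pixels(xsize * ysize * 3, 128);  // flat grey test image

    JxlEncoder* enc = JxlEncoderCreate(nullptr);

    JxlBasicInfo info;
    JxlEncoderInitBasicInfo(&info);
    info.xsize = xsize;
    info.ysize = ysize;
    info.bits_per_sample = 8;
    info.num_color_channels = 3;
    JxlEncoderSetBasicInfo(enc, &info);

    JxlColorEncoding color;
    JxlColorEncodingSetToSRGB(&color, /*is_gray=*/JXL_FALSE);
    JxlEncoderSetColorEncoding(enc, &color);

    JxlEncoderFrameSettings* settings = JxlEncoderFrameSettingsCreate(enc, nullptr);
    // Effort 1..9 trades encode time for density (the levels discussed above).
    JxlEncoderFrameSettingsSetOption(settings, JXL_ENC_FRAME_SETTING_EFFORT, 7);
    // Distance 1.0 is the default "visually lossless" quality target.
    JxlEncoderSetFrameDistance(settings, 1.0f);

    JxlPixelFormat format = {3, JXL_TYPE_UINT8, JXL_NATIVE_ENDIAN, 0};
    JxlEncoderAddImageFrame(settings, &format, pixels.data(), pixels.size());
    JxlEncoderCloseInput(enc);

    // Drain the encoder into an output buffer (single pass is enough here).
    std::vector<uint8_t> out(1 << 20);
    uint8_t* next_out = out.data();
    size_t avail_out = out.size();
    if (JxlEncoderProcessOutput(enc, &next_out, &avail_out) == JXL_ENC_SUCCESS)
        std::printf("encoded %zu bytes\n", out.size() - avail_out);

    JxlEncoderDestroy(enc);
    return 0;
}
```

Raising JXL_ENC_FRAME_SETTING_EFFORT from 1 towards 9 spends more encode time searching for a denser representation, which is the speed/density trade-off discussed in the post above.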
  • Compressing Text into Images
    4 projects | news.ycombinator.com | 14 Jan 2024
    For JPEG XL, refer to its format overview [1]. In short, its lossless mode uses a combination of multiple techniques: rANS coding with an alias table, LZ77, reversible color transforms (a toy example of such a transform is sketched after this post), a general vector quantization that subsumes palettes, a modified Haar transform, and a learnable meta-adaptive decision tree for context modelling.

    One good thing about JPEG XL is that its lossy mode also largely uses the same tools, with the major addition of specialized quantization and context modelling for low- and high-frequency components.

    [1] https://github.com/libjxl/libjxl/blob/main/doc/format_overvi...
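
Of the lossless tools listed above, a reversible color transform is the easiest to show in a few lines. The sketch below uses YCoCg-R, a lifting-based, integer-only transform in the same family as the RCTs of JPEG XL's modular mode (an illustration of the idea, not necessarily the exact variant the codec selects): because every step is an exact integer add and shift, the inverse reproduces the input bit for bit.

```cpp
// YCoCg-R: a reversible, integer-only color transform of the same family as
// the RCTs used by JPEG XL's modular (lossless) mode. Illustrative only.
#include <cassert>
#include <cstdint>

struct YCoCgR { int32_t y, co, cg; };

// Forward lifting steps: only adds, subtracts, and shifts.
YCoCgR forward(int32_t r, int32_t g, int32_t b) {
    int32_t co = r - b;
    int32_t tmp = b + (co >> 1);
    int32_t cg = g - tmp;
    int32_t y = tmp + (cg >> 1);
    return {y, co, cg};
}

// Inverse lifting steps undo the forward pass exactly.
void inverse(const YCoCgR& c, int32_t& r, int32_t& g, int32_t& b) {
    int32_t tmp = c.y - (c.cg >> 1);
    g = c.cg + tmp;
    b = tmp - (c.co >> 1);
    r = b + c.co;
}

int main() {
    // Spot-check that the transform round-trips exactly over the 8-bit cube.
    for (int r = 0; r < 256; r += 5)
        for (int g = 0; g < 256; g += 5)
            for (int b = 0; b < 256; b += 5) {
                YCoCgR c = forward(r, g, b);
                int32_t r2, g2, b2;
                inverse(c, r2, g2, b2);
                assert(r == r2 && g == g2 && b == b2);
            }
    return 0;
}
```

Decorrelating R/G/B into a luma-like channel plus two difference channels makes the data cheaper for the entropy coder without costing anything on reconstruction, which is why transforms of this kind show up in lossless modes.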

  • JPEG XL v0.9.0 Released
    1 project | news.ycombinator.com | 23 Dec 2023
  • Stripping Metadata
    1 project | /r/jpegxl | 19 Oct 2023
    The cjxl source is here. If you spot any reason why -x strip=exif may not work, tell me.
  • Www Which WASM Works
    2 projects | news.ycombinator.com | 24 Sep 2023
    The problem is that the instructions for actually running the WASM file are not that clear... the docs the author mentions show how to compile to WASM, which is easy enough, but then here are the instructions to make that actually work in the browser:

    https://github.com/libjxl/libjxl/blob/main/tools/wasm_demo/R...

    Yeah, you need some mysterious Python script, a JS service worker at runtime, a choice between the WASM and WASM_SIMD targets, a browser that supports threads and SIMD if you chose the latter, and you have to serve everything with the appropriate custom HTTP headers... just reading that, I can see that getting this stuff working on non-browser WASM targets would likely require expertise in WASM, which is the point of the OP. WASM's UX is just not there yet.

  • First automatic JPEG-XL cloud service
    2 projects | news.ycombinator.com | 19 Sep 2023
    https://github.com/libjxl/libjxl#usage

    > Specifically for JPEG files, the default cjxl behavior is to apply lossless recompression and the default djxl behavior is to reconstruct the original JPEG file (when the extension of the output file is .jpg).
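
A hedged sketch of the same lossless JPEG recompression through the library API rather than the cjxl command line (JxlEncoderStoreJPEGMetadata and JxlEncoderAddJPEGFrame from <jxl/encode.h> are the relevant calls; file handling and error checks are simplified):

```cpp
// Losslessly recompress an existing JPEG into a JXL container, so the exact
// original JPEG bytes can be reconstructed later (what djxl does by default
// when writing a .jpg). Sketch based on libjxl's C API; error handling trimmed.
#include <jxl/encode.h>
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s input.jpg\n", argv[0]); return 1; }

    // Read the whole JPEG file into memory.
    std::ifstream in(argv[1], std::ios::binary);
    std::vector<uint8_t> jpeg((std::istreambuf_iterator<char>(in)),
                              std::istreambuf_iterator<char>());

    JxlEncoder* enc = JxlEncoderCreate(nullptr);
    // Keep the data needed to reconstruct the original JPEG exactly.
    JxlEncoderStoreJPEGMetadata(enc, JXL_TRUE);

    JxlEncoderFrameSettings* settings = JxlEncoderFrameSettingsCreate(enc, nullptr);
    // Hand the JPEG's coded data to the encoder instead of decoded pixels.
    JxlEncoderAddJPEGFrame(settings, jpeg.data(), jpeg.size());
    JxlEncoderCloseInput(enc);

    std::vector<uint8_t> out(jpeg.size() + (1 << 16));
    uint8_t* next_out = out.data();
    size_t avail_out = out.size();
    if (JxlEncoderProcessOutput(enc, &next_out, &avail_out) == JXL_ENC_SUCCESS)
        std::printf("recompressed %zu -> %zu bytes\n",
                    jpeg.size(), out.size() - avail_out);

    JxlEncoderDestroy(enc);
    return 0;
}
```

Because the JPEG's coefficients and reconstruction metadata are carried through rather than re-encoded from pixels, djxl can later emit a byte-identical copy of the original .jpg, as the README excerpt above describes.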

  • Why "sudo make install"?
    1 project | /r/linux | 16 Sep 2023
    I mean, compiling a bleeding-edge kicad, inkscape or jpeg-xl is easy, but it will probably trash your system if you already have an older version installed.
  • XYB JPEG: Perceptual Color Encoding Tested
    2 projects | news.ycombinator.com | 20 Jul 2023
    But you could look at your image viewer, which could have the lossless indicator? (and there is an issue open to add this indicator for jxl files)

    https://github.com/libjxl/libjxl/issues/432

What are some alternatives?

When comparing filmulator-gui and libjxl you can also consider the following projects:

sosumi-snap

qoi - The “Quite OK Image Format” for fast, lossless image compression

photostructure-for-servers - PhotoStructure is your new home for all your photos and videos. Installation should only take a couple minutes.

Android-Image-Filter - some android image filters

RawTherapee - A powerful cross-platform raw photo processing program

DirectXMath - DirectXMath is an all inline SIMD C++ linear algebra library for use in games and graphics apps

darktable - darktable is an open source photography workflow application and raw developer

libavif - libavif - Library for encoding and decoding .avif files

vello - An experimental GPU compute-centric 2D renderer.

jxl-migrate - A simple Python script to migrate images to the JPEG XL (JXL) format

wallpapers - Wallpapers for Pop!_OS

squoosh - Make images smaller using best-in-class codecs, right in the browser.