No-Reference-Image-Quality-Assessment-using-BRISQUE-Model
Implementation of the paper "No Reference Image Quality Assessment in the Spatial Domain" by A Mittal et al. in OpenCV (using both C++ and Python)
Of course, everyone says "Stream", but you could just drop an I-frame and the packets after it, up to the next I-frame, into a vector, serialize it, and send that across the network with zmq or something. Kind of like this, although funnily enough I still run into issues setting up the timing correctly in ffmpeg's muxer. I very rarely write video out anywhere, so that hasn't been a high priority.
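A minimal sketch of that grouping idea in Python, using stand-in `(is_keyframe, payload)` tuples instead of real demuxed packets (with libav packets you'd test the `AV_PKT_FLAG_KEY` flag), and plain serialization in place of the zmq send so the sketch stays self-contained:

```python
import pickle

def group_by_keyframe(packets):
    """Yield lists of packets, each starting at a keyframe and running
    up to (but not including) the next keyframe."""
    group = []
    for is_key, payload in packets:
        if is_key and group:
            yield group          # previous group is complete
            group = []
        group.append((is_key, payload))
    if group:
        yield group

# Toy packet stream: two keyframes, a few predicted frames each.
packets = [(True, b"I0"), (False, b"P1"), (False, b"P2"),
           (True, b"I1"), (False, b"P3")]
groups = list(group_by_keyframe(packets))

# Each group is independently decodable, so serialize it and ship it
# (e.g. over a zmq PUSH socket -- omitted here).
wire = [pickle.dumps(g) for g in groups]
```

Since every group opens with a keyframe, a receiver can start decoding from any group boundary, which is what makes this a workable poor-man's stream.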
I've kind of made a substitute, and while it's a CLI program, it does what Handbrake used to do for me. That link is to the CLI program, but it should be readable enough to make a decision. It's specifically for re-encoding video files; Handbrake and ffmpeg do so much more.
I used Editly to make a clip of several videos with transitions and screenshots. It worked out great, with me just editing the json5 file to tweak things.
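For reference, the kind of json5 spec Editly consumes looks roughly like this; the field names are from Editly's README as I recall them, so treat the exact keys as approximate:

```json5
{
  outPath: './out.mp4',
  // Applied between every pair of clips unless overridden per clip.
  defaults: { transition: { name: 'fade' } },
  clips: [
    { layers: [{ type: 'video', path: 'intro.mp4' }] },
    // A still screenshot held for a few seconds.
    { duration: 3, layers: [{ type: 'image', path: 'screenshot.png' }] },
    { layers: [{ type: 'video', path: 'demo.mp4' }] },
  ],
}
```

Tweaking transitions or reordering clips is then just editing this file and re-running editly.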
But that open ecosystem also enables some really unique capabilities! In particular, I've recently been excited about the gst-meet plugin, which allows using Jitsi Meet meetings as a src/sink. And if you use a meeting as a source, you can extract each participant's video/audio as a separate stream.
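A command-line sketch of the receive side; the flag names here are recalled from gst-meet's README and may be out of date, so `gst-meet --help` is authoritative. The per-participant pipeline template is instantiated once for each remote participant, which is how each person's media ends up as its own stream:

```shell
# Hypothetical invocation -- verify flags against your gst-meet version.
gst-meet --web-socket-url=wss://your-jitsi-host/xmpp-websocket \
         --room-name=testroom \
         --recv-pipeline-participant-template="opusdec name=audio ! autoaudiosink"
```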
I've been working on a C++ API for it, but it's not coming along very quickly, as I rarely feel like programming in my spare time these days. I also have a couple of different examples of streaming with it: one with nginx and MPEG-DASH, and one with WebM/HTML5. The WebM/HTML5 one works a lot better but uses ffserver, which is deprecated in recent versions of ffmpeg.
They originally wanted the system mainly for video quality assessment, but once we started showing it off they were far more interested in it for several other things. I did have some success turning Krshrimali's BRISQUE image quality assessment code into a library with an API I could feed individual video frames through, but it turned out to be very slow without GPU acceleration (in the neighborhood of 2 seconds per frame for 1080p, IIRC), so I didn't put much more experimentation into it. I think even with GPU acceleration it'd be hard to get that down to real time. Might be OK for short clips, though.
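To put that per-frame cost in perspective, the arithmetic below works out the real-time gap; the 2 s/frame figure is the remembered CPU-only cost quoted above, not a fresh benchmark:

```python
# Rough throughput estimate for per-frame quality scoring.
# 2.0 s/frame is the assumed CPU-only cost for 1080p frames.
def processing_ratio(seconds_per_frame: float, fps: float) -> float:
    """Seconds of wall-clock work per second of video."""
    return seconds_per_frame * fps

ratio = processing_ratio(2.0, 30.0)   # 60x slower than real time

# A 10-second clip at 30 fps would take about 600 s (10 minutes):
clip_seconds = 10 * 30 * 2.0
```

So to reach real time you'd need roughly a 60x speedup at 30 fps, which is why GPU acceleration alone probably wouldn't close the gap, while a short clip processed offline is still tolerable.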
Not documented (example: https://github.com/FFmpeg/FFmpeg/blob/master/libavcodec/aac_parser.c has no comments explaining the details of what it is supposed to be doing, which runs counter to what one learns in university about how to properly program). ffmpeg seems to be just somebody's code dump, and all the knowledge required to modify the program efficiently is distributed across the heads of its authors, which makes it an amateurish endeavour (it might be the best in the world, but that's independent of any judgment about quality). (Good open-source programs exist, but popular software almost never is good, because large good programs are basically a fairy tale.)