-
The time stretch algorithm is implemented in https://github.com/audacity/audacity/blob/master/libraries/l... in particular the functions _time_stretch and _process_hop. It looks to me like a classic phase vocoder with vertical phase coherence (cf. https://en.wikipedia.org/wiki/Phase_vocoder).
The basic idea is this. For a time-stretch factor of, say, 2x, the frequency spectrum of the stretched output at 2 sec should be the same as the frequency spectrum of the unstretched input at 1 sec. The naive algorithm therefore takes a short section of the signal at 1 sec, translates it to 2 sec, and adds it to the result. Unfortunately, this method generates all sorts of unwanted artifacts.
Imagine a pure sine wave. Now take 2 short sections of the wave from 2 random times, overlap them, and add them together. What happens? Well, it depends on the phase of each section. If the sections are out of phase, they cancel on the overlap; if in phase, they constructively interfere.
The phase vocoder is all about overlapping and adding sections together so that the phases of all the different sine waves in the sections line up. Thus, in any phase vocoder algorithm, you will see code that searches for peaks in the spectrum (see _time_stretch code). Each peak is an assumed sine wave, and corresponding peaks in adjacent frames should have their phases match.
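To make that concrete, here is a rough sketch of the textbook version of that idea in Python/NumPy (peak-based "identity" phase locking in the style of Laroche and Dolson). This is not Audacity's code, and every name and parameter here is made up for illustration:

    import numpy as np

    def stretch(x, factor, n_fft=2048, hop_a=512):
        # Phase vocoder with peak-based ("identity") phase locking.
        # Illustrative only -- not Audacity's _time_stretch/_process_hop.
        hop_s = int(round(hop_a * factor))                      # synthesis hop
        win = np.hanning(n_fft)
        omega = 2 * np.pi * np.arange(n_fft // 2 + 1) / n_fft   # bin freqs, rad/sample

        spectra = [np.fft.rfft(x[i:i + n_fft] * win)
                   for i in range(0, len(x) - n_fft, hop_a)]
        out = np.zeros(hop_s * len(spectra) + n_fft)

        prev_phase = np.angle(spectra[0])
        synth_phase = prev_phase.copy()
        for t, spec in enumerate(spectra):
            mag, phase = np.abs(spec), np.angle(spec)
            if t > 0:
                # True frequency of each bin from its phase advance over the analysis hop
                dev = phase - prev_phase - omega * hop_a
                dev = (dev + np.pi) % (2 * np.pi) - np.pi       # wrap to [-pi, pi)
                inst_freq = omega + dev / hop_a

                # Peaks = local maxima of the magnitude spectrum (the assumed sine waves)
                peaks = np.where((mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]))[0] + 1
                if len(peaks):
                    # Horizontal coherence: advance each peak's phase by its true frequency
                    peak_phase = synth_phase[peaks] + inst_freq[peaks] * hop_s
                    # Vertical coherence: every bin inherits the phase rotation of its
                    # nearest peak, so the region around each partial stays aligned
                    nearest = np.searchsorted((peaks[:-1] + peaks[1:]) / 2,
                                              np.arange(len(mag)))
                    synth_phase = phase + (peak_phase[nearest] - phase[peaks][nearest])
            prev_phase = phase

            # Overlap-add the re-phased frame at the synthesis hop (gain not normalized)
            out[t * hop_s: t * hop_s + n_fft] += np.fft.irfft(mag * np.exp(1j * synth_phase)) * win
        return out

The last step is the one that matters here: bins around a peak get the same phase rotation as the peak itself, which is the "vertical phase coherence" that keeps each partial sounding like one sine wave rather than several slightly detuned ones.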
-
The contents are rendered through gtk/cairo, which not only goes through https://www.xquartz.org/ but also doesn't use GPU rendering (it was experimental 3 years ago, maybe better now). The main issue seems to be that neither the Inkscape nor the gtk people have many low-level Darwin experts or much time available to invest in debugging the whole rendering stack. See for example https://gitlab.com/inkscape/inkscape/-/issues/1614 and all the other referenced issues for all the gory details.
-
This is a big deal! My primary use of Audacity was to create custom edits of tracks I wanted to mix, saving effort later when DJing. Aligning my edits with the beat grid always took a lot of work, so much so that I hacked up a different audio editor, trying to integrate beat detection (https://github.com/marssaxman/gum-audio). Audacity felt like it was frustratingly behind the times in this regard; well, it's five years later now, but I'm glad to see they've made it happen.
-
Total OG old-school Swiss Army knife for audio processing/playing .. that takes me back.
In the 1990s, in a long workroom of Sun workstations, we rigged an rlogin sox script to play successive parts of some spooky music as a co-worker walked past each one late one night.
https://sourceforge.net/projects/sox/
https://github.com/chirlu/sox
-
REAPERDenoiser
Tutorial source code: A JSFX denoiser for REAPER based on Norbert Wiener's deconvolution algorithm.
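For anyone wondering what that boils down to: in the pure denoising case (no deconvolution kernel) the Wiener approach is a per-FFT-bin gain driven by a noise-power estimate. A rough sketch in Python/NumPy, not the REAPERDenoiser JSFX code, with all names and parameters invented for illustration:

    import numpy as np

    def wiener_denoise(x, noise_sample, n_fft=1024, hop=512):
        # Spectral denoising with a per-bin Wiener-style gain estimated from a
        # noise-only recording. Illustrative only -- not the REAPERDenoiser code.
        win = np.hanning(n_fft)

        def stft(sig):
            return np.array([np.fft.rfft(sig[i:i + n_fft] * win)
                             for i in range(0, len(sig) - n_fft, hop)])

        # Average noise power per bin, taken from the noise-only sample
        noise_power = np.mean(np.abs(stft(noise_sample)) ** 2, axis=0)

        out = np.zeros(len(x))
        for t, spec in enumerate(stft(x)):
            sig_power = np.abs(spec) ** 2
            # Wiener gain ~ SNR/(SNR+1): attenuate bins where noise dominates
            gain = np.maximum(sig_power - noise_power, 0.0) / np.maximum(sig_power, 1e-12)
            out[t * hop: t * hop + n_fft] += np.fft.irfft(spec * gain) * win
        return out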