Show HN: Glicol (Graph-Oriented Live Coding Language) and DSP Lib Written in Rust

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • glicol

    Graph-oriented live coding language and music/audio DSP library written in Rust

    It has been about 2 years since I started developing Glicol. You can try it here:

    https://glicol.org

    Recently I added support for VST and Bela, plus a responsive design for mobile devices. It would be great if you could try it and give me some feedback here or in the GitHub repo:

    https://github.com/chaosprint/glicol

    Thanks!

  • faust

    Functional programming language for signal processing and sound synthesis (by grame-cncm)

    Thanks for sharing! Looking forward to your UGen as well. It's great to see that so many languages support sample-level control, including the Pd and Max you mentioned. There was a discussion on the Faust repo a while ago that might be interesting for you too: https://github.com/grame-cncm/faust/issues/685

  • Camomile

    An audio plugin with Pure Data embedded that allows you to load and control patches

  • vst-rs

    VST 2.4 API implementation in Rust. Create plugins or hosts. Previously rust-vst on the RustDSP group.

    https://youtu.be/yFKH9ou_XyQ

    If you want your own VST (with your name as the author, and one you can sell), you can start with vst-rs:

    https://github.com/RustAudio/vst-rs

    Want a GUI? Here is a template:

  • egui_baseview_test_vst2

    Barebones egui_baseview vst2 plugin with basic parameter control

    https://github.com/DGriffin91/egui_baseview_test_vst2

    When writing the VST, you may need a Rust audio library: you can look at dasp or fundsp, or wait for glicol_synth to be published as a Rust crate (Rust's equivalent of a pip or npm package). A bare-bones vst-rs starting point is sketched below.
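    For reference, here is a bare-bones gain plugin in the spirit of the vst-rs README. This is only a sketch: the plugin name and unique_id are placeholders, and the exact set of required Plugin methods varies slightly between versions of the vst crate.

      // Minimal vst-rs sketch: a fixed gain effect, built as a cdylib and
      // loaded by a VST2 host. Name and unique_id below are placeholders.
      #[macro_use]
      extern crate vst;

      use vst::prelude::*;

      struct MyGain;

      impl Plugin for MyGain {
          fn new(_host: HostCallback) -> Self {
              MyGain
          }

          fn get_info(&self) -> Info {
              Info {
                  name: "MyGain".to_string(), // shown by the host; vendor/author fields live here too
                  unique_id: 90210,           // hosts use this to tell plugins apart
                  ..Default::default()
              }
          }

          // The host calls this once per processing block.
          fn process(&mut self, buffer: &mut AudioBuffer<f32>) {
              for (input, output) in buffer.zip() {
                  for (i, o) in input.iter().zip(output.iter_mut()) {
                      *o = *i * 0.5; // halve the amplitude (about -6 dB)
                  }
              }
          }
      }

      plugin_main!(MyGain);

    The body of process() is where a DSP library such as dasp, fundsp or glicol_synth would plug in.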

  • faustgen-supercollider

    Livecode Faust in SuperCollider using an embedded Faust compiler.

    > In Glicol, you can use different nodes directly as UGens and you can also define your own `meta` node in real-time,

    That's cool!

    FWIW, Mads has been working on a Faust UGen that can compile Faust code on the fly: https://github.com/madskjeldgaard/faustgen-supercollider

    Also, there is an open-source implementation of Reaper's JSFX language: https://github.com/asb2m10/jsusfx. The repo already contains Max and Pd objects. I have been thinking of contributing a SuperCollider UGen, but there are so many other things on my list :-)

  • jsusfx

    Opensource Jesusonic FX implementation

  • ddwChucklib-livecode

    A live-coding interface for chucklib objects

    I guess it depends on what you want to do. For typical Algorave-style music, I would agree that sclang doesn't offer the right high-level abstractions. However, it's possible to implement live coding dialects on top of sclang, e.g.: https://github.com/jamshark70/ddwChucklib-livecode

  • ixilang

    A live coding language. An extension to SuperCollider, currently Cocoa only.

  • pure-data

    Pure Data - tracking Miller's SourceForge git repository (also used by libpd) (by Spacechild1)

    FWIW, Pd and Max/MSP have always had sample-level control in the sense that subpatches can be reblocked. For example, if you put a [block~ 1] object in a Pd subpatch, the process function will be called for every sample, so you can have single-sample feedback paths. Pd also has the [fexpr~] object, which lets users write FIR and IIR filters in a simple expression syntax. Finally, Max/MSP offers the very powerful [gen~] object. You can check it out for inspiration (if you haven't already).
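    To make the per-sample point concrete in Rust terms (a sketch, not Pd or Glicol code): here is a one-pole IIR lowpass whose feedback state is updated on every sample, the kind of recursion you would write with [fexpr~] or compute inside a [block~ 1] subpatch.

      // One-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
      // The z1 field is the per-sample feedback path.
      struct OnePole {
          a: f32,  // smoothing coefficient in 0..1
          z1: f32, // previous output sample (feedback)
      }

      impl OnePole {
          fn new(a: f32) -> Self {
              Self { a, z1: 0.0 }
          }

          // Called once per sample.
          fn tick(&mut self, x: f32) -> f32 {
              self.z1 += self.a * (x - self.z1);
              self.z1
          }
      }

      fn main() {
          let mut lp = OnePole::new(0.1);
          let step = [1.0f32; 8]; // a step input
          let out: Vec<f32> = step.iter().map(|&x| lp.tick(x)).collect();
          println!("{:?}", out); // rises exponentially toward 1.0
      }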

    Pd (and Max/MSP) also allow you to upsample/resample subpatches, which is important for minimizing aliasing (caused by certain kinds of processing, such as distortion).
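    As a deliberately crude illustration of the upsampling idea (not how Pd implements it): run the aliasing-prone nonlinearity at twice the sample rate and filter before coming back down. A real implementation would use proper polyphase or halfband filters instead of the linear interpolation and 2-sample average used here.

      // 2x oversampling around a distortion stage: the nonlinearity creates
      // harmonics above Nyquist, so it is evaluated at the higher rate and the
      // result is (roughly) low-passed before downsampling.
      fn distort(x: f32) -> f32 {
          x.tanh()
      }

      fn process_oversampled(input: &[f32]) -> Vec<f32> {
          let mut prev = 0.0f32;
          let mut out = Vec::with_capacity(input.len());
          for &x in input {
              // Upsample 2x by linear interpolation and distort both samples...
              let a = distort((prev + x) * 0.5);
              let b = distort(x);
              // ...then downsample by averaging (a very rough anti-alias filter).
              out.push((a + b) * 0.5);
              prev = x;
          }
          out
      }

      fn main() {
          let sine: Vec<f32> = (0..16)
              .map(|n| (2.0 * std::f32::consts::PI * n as f32 / 16.0).sin())
              .collect();
          println!("{:?}", process_oversampled(&sine));
      }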

    Pd also uses the reblocking mechanism to implement FFT processing. The output of [rfft~] is just an ordinary signal that can be manipulated by the usual signal objects. You can also write the output to a table, manipulate it in the control domain with [bang~], and then read it back in the next DSP tick. IMO, this is a very powerful and elegant approach. SuperCollider, on the other hand, only supports a single global blocksize and samplerate, which prevents temporary upsampling + anti-aliasing, severely limits single-sample feedback, and leads to a rather awkward FFT implementation (you need dedicated PV_* objects for the most basic operations, such as addition and multiplication).
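    The "FFT bins are just data you can manipulate" idea translates directly to Rust. Here is a sketch using the rustfft crate (an arbitrary choice for the example, not something Glicol requires): a brick-wall low-pass done by zeroing bins and transforming back.

      use rustfft::{num_complex::Complex, FftPlanner};

      fn main() {
          let n = 64;
          let mut planner = FftPlanner::<f32>::new();
          let fft = planner.plan_fft_forward(n);
          let ifft = planner.plan_fft_inverse(n);

          // Test signal: a low (2 cycles) plus a high (20 cycles) sine.
          let mut buf: Vec<Complex<f32>> = (0..n)
              .map(|i| {
                  let t = i as f32 / n as f32;
                  let s = (2.0 * std::f32::consts::PI * 2.0 * t).sin()
                      + (2.0 * std::f32::consts::PI * 20.0 * t).sin();
                  Complex::new(s, 0.0)
              })
              .collect();

          fft.process(&mut buf);

          // Spectral processing as plain array manipulation: keep bins 0..8
          // and their mirrored counterparts, zero everything else.
          for (i, bin) in buf.iter_mut().enumerate() {
              if i >= 8 && i < n - 8 {
                  *bin = Complex::new(0.0, 0.0);
              }
          }

          ifft.process(&mut buf);
          // rustfft's inverse is unnormalized, so divide by N.
          let out: Vec<f32> = buf.iter().map(|c| c.re / n as f32).collect();
          println!("{:?}", &out[..8]); // only the low sine remains
      }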

    Another thing to think about is multi-threaded DSP. With Supernova, Tim Blechmann miraculously managed to retrofit multi-threading onto scsynth. Max/MSP offers some support for multi-threading (IIRC, top level patches and poly~ instances run in parallel). Recently, I have been working on adding multi-threading to Pd (it's working, but still very much experimental): https://github.com/Spacechild1/pure-data/tree/multi-threadin.... If you design an audio engine in 2022, multi-threading should be considered from the start; you don't have to implement it yet, but at least leave the door open to do it at a later stage.
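    To sketch where that parallelism sits (plain std::thread with a toy branch-rendering function; nothing here comes from Supernova or Pd): independent branches of a signal graph can be rendered concurrently and then mixed. A real engine would keep persistent worker threads and lock-free queues rather than spawning threads for every block.

      use std::thread;

      // Stand-in for rendering one independent subgraph (oscillators, filters, ...).
      fn render_branch(seed: f32, block_size: usize) -> Vec<f32> {
          (0..block_size).map(|n| (n as f32 * seed).sin()).collect()
      }

      fn main() {
          let block_size = 64;

          // Render two independent branches of the graph in parallel.
          let (a, b) = thread::scope(|s| {
              let h1 = s.spawn(|| render_branch(0.01, block_size));
              let h2 = s.spawn(|| render_branch(0.02, block_size));
              (h1.join().unwrap(), h2.join().unwrap())
          });

          // Mix the branches into the output block.
          let out: Vec<f32> = a.iter().zip(&b).map(|(x, y)| 0.5 * (x + y)).collect();
          println!("{:?}", &out[..8]);
      }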

    ---

    I'm not sure how far you want to go with Glicol. I guess for the typical Algorave live coder all these things are probably not important. But if you want Glicol to be a flexible, modern audio engine/library, you will have to think about FFT, upsampling, single-sample feedback, multi-processing etc. at some point. My advice is not to leave these things as an afterthought; you should at least think about them from the start while designing your engine, if you want to avoid some of the mistakes that other existing audio engines made. This is just a word of "warning" from someone who has spent countless hours in the Pd and SuperCollider source code :-)
