Am I Not Able to Read "Formalised Music" if I Don't Have Advanced Education in Mathematics?

This page summarizes the projects mentioned and recommended in the original post on /r/composer

  • diy-audio

    DIY: Audio, workshop for audio programming

  • To get you started:

    1. https://sonic-pi.net/ is a good place to experiment with using code to generate music and with live-coding.
    2. You can also try generating MIDI (or https://abcnotation.com/ notation, converted to MIDI), then use a MIDI player to play it.
    3. To base it on data, use data to drive one of the two approaches above.
    4. Coding a synth: I did an intro workshop on it a while ago: https://github.com/adinfinit/diy-audio, https://www.youtube.com/watch?v=Dnpa1tBI6-E.
    5. Connecting a synth from building blocks: this is like modular synthesis, but with code. Look at Pure Data or SuperCollider; AFAIR Sonic Pi also supports a similar approach.
    6. Speech synthesis: look into formant synthesis. You'll need to know how to write or combine a basic synth and a filter for this one.
    7. Audio analysis: this goes deeper into mathematics; look for books on digital signal processing. Start with the Fourier transform, as it's one of the foundational building blocks.
    8. Audio visualization: usually this involves computing a short-time FFT and adjusting a visual image based on that information. For this you need to look into programmatic graphics; Processing is a good environment to get started. I also did an intro workshop on visuals programming: https://github.com/adinfinit/diy-visuals, https://www.youtube.com/watch?v=DCnU2s8icIE -- instead of a mouse, let audio drive the visuals.
    9. Audio visualization: you can also look into writing shaders based on FFT data. There are plenty of tutorials on the topic; here's one that looked decent at first glance: https://www.learnwithjason.dev/build-your-own-audio-visualization-in-a-shader.
    10. For composition analysis you could try automatically detecting chords from MIDI data. See also "Computational Musicology".
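As a sketch of point 4 (coding a synth), here's a minimal pure-Python sine-wave generator that writes a playable WAV file using only the standard library. The helper names and the `tone.wav` file name are made up for illustration; they are not from the workshop linked above:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second (CD quality)

def sine_wave(freq_hz, duration_s, amplitude=0.5):
    """Generate 16-bit PCM samples for a sine tone."""
    n_samples = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        value = amplitude * math.sin(2 * math.pi * freq_hz * t)
        samples.append(int(value * 32767))  # scale to signed 16-bit range
    return samples

def write_wav(path, samples):
    """Write mono 16-bit samples to a WAV file."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 2 bytes = 16 bits per sample
        f.setframerate(SAMPLE_RATE)
        f.writeframes(struct.pack("<%dh" % len(samples), *samples))

# One second of concert A (440 Hz).
write_wav("tone.wav", sine_wave(440.0, 1.0))
```

From here, a "synth" is mostly a matter of summing and shaping such oscillators: add harmonics, apply an amplitude envelope, run the samples through a filter.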
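For point 7 (audio analysis), the Fourier transform is small enough to write out by hand before reaching for a DSP library. This is a naive O(n²) DFT, fine for building intuition on short signals; the function names are just illustrative:

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform: O(n^2), fine for short signals."""
    n = len(signal)
    return [
        sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        for k in range(n)
    ]

def dominant_bin(signal):
    """Index of the strongest frequency bin (ignoring negative frequencies)."""
    spectrum = dft(signal)
    half = spectrum[: len(spectrum) // 2]
    return max(range(len(half)), key=lambda k: abs(half[k]))

# A sine with 5 full cycles over 64 samples should peak at bin 5.
samples = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
# dominant_bin(samples) == 5
```

The short-time FFT mentioned in point 8 is this same operation applied to successive windowed slices of a signal, so the per-frame spectrum can drive visuals over time.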
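Point 10 (detecting chords from MIDI data) can start much simpler than it sounds: reduce MIDI note numbers to pitch classes and match interval patterns. A toy sketch, covering only root-position triad shapes (real computational-musicology tools handle inversions, extensions, and voicing far more carefully):

```python
# Pitch-class interval patterns for common triads, relative to the root (0).
TRIAD_SHAPES = {
    (0, 4, 7): "major",
    (0, 3, 7): "minor",
    (0, 3, 6): "diminished",
    (0, 4, 8): "augmented",
}

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def detect_triad(midi_notes):
    """Name the triad formed by a set of MIDI note numbers, if any.

    Reduces the notes to pitch classes, then tries each pitch class as the
    root and checks the interval pattern against known triad shapes.
    """
    pitch_classes = sorted({n % 12 for n in midi_notes})
    for root in pitch_classes:
        shape = tuple(sorted((pc - root) % 12 for pc in pitch_classes))
        if shape in TRIAD_SHAPES:
            return f"{NOTE_NAMES[root]} {TRIAD_SHAPES[shape]}"
    return None

# MIDI notes 60, 64, 67 are C, E, G.
# detect_triad([60, 64, 67]) == "C major"
```

Trying every pitch class as a candidate root is what makes this work for inverted voicings too, e.g. `[64, 67, 72]` (E-G-C) still comes back as "C major".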

  • diy-visuals

    DIY: Visuals, workshop for visuals programming

  • Sonic Pi

    Code. Music. Live.

