How to write music in Python: three libraries for specialists of different levels

We continue our series on musical programming. Earlier we talked about the Csound, SuperCollider, and Pure Data languages; today we discuss Python and the FoxDot, Pippi, and Music-Code libraries.





Photo by Conor Samuel / Unsplash



FoxDot



This is a library for aspiring music programmers. It was developed by engineer Ryan Kirkbride in 2015. FoxDot was conceived as a personal project - Ryan used it to host live-coding sessions under the pseudonym Qirky - but now musicians around the world work with the tool.



The library uses the capabilities of the Open Sound Control (OSC) packet protocol and the SuperCollider audio-synthesis environment, which was developed in 1996 but is still actively supported by the community. The programmer creates objects with arguments specifying the instrument, pitch, and duration. Sounds can be patterned and looped to create complex musical designs. The code turns into music in real time - here's an example of working with the library:
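The original embedded demo is not reproduced here, but a minimal FoxDot sketch looks roughly like the following. It requires FoxDot and a running SuperCollider server, so it is shown for illustration rather than as a standalone script:

```python
from FoxDot import *

# p1 is a player object; pluck is one of FoxDot's built-in synths.
# The list gives scale degrees, dur sets the note length in beats.
p1 >> pluck([0, 2, 4, 7], dur=1/2, amp=0.8)

# d1 loops drum samples: "x" is a kick, "o" a snare, "-" a hi-hat.
d1 >> play("x-o-")
```

Assigning a new pattern to the same player object swaps the music live, which is what makes the library suitable for on-stage live coding.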





If you want to learn the tool yourself, it makes sense to start with the detailed official documentation. Answers to many questions can be found on the thematic forum. You can leave suggestions and requests for new features in the repository on GitHub.
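The OSC packets mentioned above have a simple binary layout, and the format can be sketched with nothing but the standard library. The function below is an illustration of the protocol itself, not FoxDot's or SuperCollider's actual code: address and type-tag strings are null-terminated and padded to four-byte boundaries, followed by big-endian arguments.

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated, then padded to a multiple of 4 bytes
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message: padded address, padded type-tag
    string (",s", ",i", ",f"...), then big-endian argument data."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        elif isinstance(a, str):
            tags += "s"
            payload += _pad(a.encode())
    return _pad(address.encode()) + _pad(tags.encode()) + payload

# A message like the ones a client might send to an OSC server
msg = osc_message("/s_new", "pluck", 1001)
```

Such a datagram would normally be sent over UDP to the synthesis server's port; the point here is only to show how little is inside each packet.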



Pippi



This library was developed by a member of the indie label LuvSound, which supports new music and young artists. It includes several data structures for working with sound, among them the commonly used SoundBuffer and Wavetable. Pippi's purpose is to work with existing sounds - the tool lets you combine and modify loaded samples.



from pippi import dsp

sound1 = dsp.read('sound1.wav')
sound2 = dsp.read('sound2.flac')

# Mix two sounds
both = sound1 & sound2


It also makes it possible to build entirely new acoustic structures on top of samples - for example, to form "granular" sounds. This is a method in which a sample is divided into many short sections ("grains") that are then mixed back together. Here is the code to create a 10-second signal of this kind from the audio stored in the enveloped variable:



# Synthesize a 10 second graincloud from the sound,
# with grain length modulating between 20ms and 2s
# over a triangle shaped curve.
cloudy = enveloped.cloud(10, grainlength=dsp.win('tri', dsp.MS*20, 2))
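To illustrate the idea independently of Pippi's API, here is a toy granular sketch in plain Python. All names and parameters are invented for illustration: short grains are copied from random positions in the source and overlap-added into the output buffer.

```python
import random

def graincloud(sample, grainlength, length, seed=0):
    """Toy granular synthesis: scatter short grains taken from
    random positions in `sample` across an output buffer of
    `length` samples, overlap-adding them at 50% overlap."""
    rng = random.Random(seed)
    out = [0.0] * length
    pos = 0
    while pos < length:
        start = rng.randrange(0, max(1, len(sample) - grainlength))
        grain = sample[start:start + grainlength]
        for i, v in enumerate(grain):
            if pos + i < length:
                out[pos + i] += v
        pos += grainlength // 2  # advance half a grain: 50% overlap
    return out

# A simple ramp wave standing in for a loaded sample (0.1 s at 44.1 kHz)
tone = [((i % 100) / 50.0) - 1.0 for i in range(4410)]
cloud = graincloud(tone, grainlength=441, length=8820)
```

Real granular engines like Pippi's additionally apply an amplitude envelope to each grain and modulate the grain length over time, as the `dsp.win('tri', ...)` argument above does.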


The Pippi library does not play music in real time, so by itself it is not well suited for live concerts. However, in a thematic thread on Hacker News the author said that he had developed a companion interface, Astrid. It automatically reloads the music file after every save, opening up possibilities for on-stage performances.



Music-Code



This small library was written by data scientist Wesley Laurence. It can generate chords and drum and bass sounds. The author uses his tool to create samples for machine learning models. The library lets you work with sequencers, aggregators, samplers, and various acoustic effects. Besides music, Music-Code can prepare visualizations for musical compositions.
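The source shows none of Music-Code's own API, so as an illustration of the general idea - generating chord samples programmatically, e.g. as training data - here is a library-free sketch using Python's standard wave module. The file name, frequencies, and function name are all arbitrary choices, not part of Music-Code:

```python
import math
import struct
import wave

def chord_wave(freqs, seconds=1.0, rate=44100, path="chord.wav"):
    """Render a chord (a sum of sine waves at the given
    frequencies) to a 16-bit mono WAV file and return its path."""
    n = int(rate * seconds)
    frames = bytearray()
    for i in range(n):
        t = i / rate
        # Average the partials so the sum stays within [-1, 1]
        s = sum(math.sin(2 * math.pi * f * t) for f in freqs) / len(freqs)
        frames += struct.pack("<h", int(s * 32767 * 0.8))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))
    return path

# C major triad: C4, E4, G4
out = chord_wave([261.63, 329.63, 392.00], seconds=0.5, path="c_major.wav")
```

A batch of such files with varied chords and durations is roughly the kind of sample set the author describes feeding to machine learning models.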





Photo by Tanner Boriack / Unsplash



So far, Music-Code has a very small audience, since the library is quite young - it was published on GitHub just three months ago. However, the author plans to develop the tool and hopes to attract new users - especially among specialists in the field of AI systems. He also plans to record and upload a video with instructions on how to get started with Music-Code.






Additional reading in the "World of Hi-Fi":



What is music programming - who is doing it and why

Where to get audio for machine learning: a selection of open libraries

How Sporth works - Forth for live music sessions

Where to get audio for developing games and other projects

Sounds for UI: a selection of thematic resources


