How to write music in Python – three libraries for specialists of different levels

We continue our series on music programming. Earlier we covered the Csound, SuperCollider and Pure Data languages; today we look at Python and the FoxDot, Pippi and Music-Code libraries.

Photo: Conor Samuel / Unsplash

FoxDot
This is a library for aspiring music programmers. It was developed by engineer Ryan Kirkbride in 2015. FoxDot began as a personal project: Ryan used it to run live-coding sessions under the pseudonym Qirky. Today, however, the tool is used by performers worldwide.

The library relies on the Open Sound Control (OSC) network protocol and the SuperCollider audio-synthesis environment, which was released in 1996 and is still actively maintained by its community. The programmer creates player objects with arguments specifying the instrument, pitch and duration. Sounds can be arranged into patterns and looped to build complex musical structures, and the code turns into music in real time.
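FoxDot itself needs a running SuperCollider server, so as a rough illustration here is a plain-Python sketch of the underlying idea of looping pitch and duration patterns. The function name and structure are ours for illustration, not FoxDot's API:

```python
from itertools import cycle, islice

# A FoxDot-style "player" pairs a pitch pattern with a duration
# pattern and loops both indefinitely; here we just materialize
# a fixed number of (pitch, duration) events.
def pattern_player(pitches, durations, steps):
    """Yield `steps` (pitch, duration) events, cycling both patterns."""
    return list(islice(zip(cycle(pitches), cycle(durations)), steps))

# A 3-note arpeggio with alternating note lengths, unrolled for 6 steps.
events = pattern_player([0, 2, 4], [0.5, 0.25], steps=6)
print(events)
```

Because the two patterns have different lengths, their combinations drift against each other, which is one of the tricks live coders use to get evolving phrases from tiny amounts of code.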

If you want to learn the tool on your own, it makes sense to start with the detailed official documentation. Answers to many questions can be found on the project's forum, and suggestions and feature requests can be left in the repository on GitHub.

Pippi
This library was developed by one of the members of the indie label LuvSound, which supports new music and young artists. It includes several data structures for working with sound, among them SoundBuffer and Wavetable. Pippi is aimed at working with existing sounds: the tool lets you combine and modify loaded samples.

from pippi import dsp

sound1 = dsp.read('sound1.wav')
sound2 = dsp.read('sound2.flac')

# Mix two sounds
both = sound1 & sound2
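Conceptually, the `&` operator mixes buffers by summing them sample by sample, padding the shorter signal with silence. A minimal plain-Python sketch of that idea (not Pippi's actual implementation):

```python
from itertools import zip_longest

def mix(a, b):
    """Sample-wise sum of two mono signals; the shorter is zero-padded."""
    return [x + y for x, y in zip_longest(a, b, fillvalue=0.0)]

left = [0.1, 0.2, 0.3]
right = [0.05, 0.05]
mixed = mix(left, right)   # three samples; the last one is left[2] alone
```

Real mixing code also has to worry about clipping (keeping the sum within [-1.0, 1.0]) and differing sample rates, which libraries like Pippi handle for you.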

Pippi also makes it possible to build entirely new acoustic constructions from samples, for example "granular" sounds. In this technique a sample is split into many short sections ("grains"), which are then mixed together. Here is the code that creates a 10-second signal of this kind from one of the loaded samples:

# Synthesize a 10 second graincloud from the sound,
# with grain length modulating between 20ms and 2s
# over a triangle shaped curve.
cloudy = sound1.cloud(10, grainlength=dsp.win('tri', dsp.MS*20, 2))
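To make the grain idea concrete, here is a toy plain-Python sketch that builds a "cloud" by concatenating randomly chosen fixed-length grains of an input signal. Unlike Pippi, it does not modulate grain length or crossfade grains, and all names are illustrative:

```python
import random

def graincloud(samples, grain_len, out_len, seed=0):
    """Build a new signal of out_len samples by concatenating
    randomly chosen fixed-length grains of the input."""
    rng = random.Random(seed)
    # Slice the input into contiguous, non-overlapping grains.
    grains = [samples[i:i + grain_len]
              for i in range(0, len(samples) - grain_len + 1, grain_len)]
    out = []
    while len(out) < out_len:
        out.extend(rng.choice(grains))
    return out[:out_len]

# A ramp signal makes it easy to see where each grain came from.
signal = [float(i) for i in range(100)]
cloud = graincloud(signal, grain_len=10, out_len=50)
```

A production granular synth would additionally apply an amplitude envelope to each grain and overlap them, which is what removes the clicks at grain boundaries.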

The Pippi library does not play music in real time, so on its own it is not well suited to live concerts. However, in a thread on Hacker News the author said he has developed a companion interface, Astrid. It automatically restarts the piece after the source file is saved, which opens up possibilities for performing on stage.

Music-Code
This small library was written by data scientist Wesley Laurence. It can generate chords, drum parts and bass sounds, and the author uses it to create samples for machine-learning models. The library lets you work with sequencers, samplers and various acoustic effects. Besides music, Music-Code can prepare visualizations for musical compositions.
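As a generic illustration of what chord generation can look like in code (a sketch of the concept, not Music-Code's actual API), a chord can be modeled as a set of MIDI note numbers converted to frequencies with the standard equal-temperament formula f = 440 · 2^((n − 69) / 12):

```python
def midi_to_hz(note):
    """Equal-temperament frequency of a MIDI note (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def major_triad(root):
    """MIDI notes of a major triad: root, major third, perfect fifth."""
    return [root, root + 4, root + 7]

# C major starting at middle C (MIDI 60): C4, E4, G4.
chord = major_triad(60)
freqs = [midi_to_hz(n) for n in chord]
```

From these frequencies a library can then synthesize actual audio, for example by summing sine waves, which is also a cheap way to generate labeled training samples for ML models.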

Photo: Tanner Boriack / Unsplash

So far, Music-Code has a very small audience: the library is quite young and was published on GitHub only three months ago. However, the author plans to keep developing the tool and hopes to attract new users, especially specialists working on AI systems. He also plans to record and upload a video guide on getting started with Music-Code.

Additional reading at Hi-Fi World:

What is music programming – who does it and why
Where to Get Audio for Machine Learning: A Compilation of Open Source Libraries
How Sporth works – a language for live music sessions
Where to get audio for game development and other projects
Sounds for UI: a selection of thematic resources
