How do you learn to feel a track? Visualizing musical frequencies in My Wave

But writing all of this is only the first step – you still need to verify that everything works as intended.

The situation here is the same as with real music – take guitar tuning, for example. There are good guitarists who play beautifully but still prefer to tune the guitar with a tuner rather than by ear. Some are simply used to it; others just can’t do it reliably on their own – their ear is good, but not good enough. If I were a guitarist, I’d be one of the latter: I can’t distinguish frequencies by ear well enough to tell whether my code is doing its job.

So I simply found the samples I needed, picked a pure 1 kHz tone, and fed it into the analyzer. The output showed exactly 1 kHz, so I knew everything was working as it should.
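The idea of that sanity check can be sketched in a few lines of Kotlin. This is my own minimal illustration, not the actual analyzer code: generate a pure 1 kHz sine, run a naive DFT over it, and confirm the dominant bin maps back to 1 kHz.

```kotlin
import kotlin.math.PI
import kotlin.math.cos
import kotlin.math.hypot
import kotlin.math.sin

// Sketch of the 1 kHz sanity check (illustrative, not the real analyzer):
// generate a pure tone, run a naive DFT, and read off the loudest frequency.
const val SAMPLE_RATE = 44_100

fun tone(freqHz: Double, samples: Int): DoubleArray =
    DoubleArray(samples) { n -> sin(2 * PI * freqHz * n / SAMPLE_RATE) }

fun dominantFrequency(signal: DoubleArray): Double {
    val n = signal.size
    var bestBin = 0
    var bestMag = 0.0
    for (k in 1 until n / 2) {             // skip DC, positive frequencies only
        var re = 0.0
        var im = 0.0
        for (t in 0 until n) {
            val angle = 2 * PI * k * t / n
            re += signal[t] * cos(angle)
            im -= signal[t] * sin(angle)
        }
        val mag = hypot(re, im)
        if (mag > bestMag) { bestMag = mag; bestBin = k }
    }
    // Bin k corresponds to frequency k * sampleRate / n.
    return bestBin * SAMPLE_RATE.toDouble() / n
}

fun main() {
    // 4410 samples at 44.1 kHz gives a bin resolution of exactly 10 Hz.
    println(dominantFrequency(tone(1_000.0, 4_410)))  // prints 1000.0
}
```

A real implementation would use an FFT instead of this O(n²) loop, but for a one-off verification the slow version is enough.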


(Color fill depth shows the low frequencies; wave bending shows the mids.)

Using the diagram below, let’s briefly break down how this works.

PcmAudioProcessor simply accumulates 100 ms of audio and returns the bytes in PCM format.
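The accumulation step can be sketched like this. The class and parameter names here are my own illustration, not the real My Wave code: buffer incoming PCM bytes until a full 100 ms window is collected, then hand the window off.

```kotlin
// Minimal sketch (illustrative names, not the actual PcmAudioProcessor):
// accumulate incoming PCM bytes until a 100 ms window is full, then hand it off.
class PcmAccumulator(
    sampleRate: Int = 44_100,
    channels: Int = 2,
    bytesPerSample: Int = 2,            // 16-bit PCM
    windowMs: Int = 100,
    private val onWindowReady: (ByteArray) -> Unit,
) {
    private val windowSize = sampleRate * channels * bytesPerSample * windowMs / 1000
    private val buffer = ByteArray(windowSize)
    private var filled = 0

    fun process(input: ByteArray) {
        var offset = 0
        while (offset < input.size) {
            val n = minOf(input.size - offset, windowSize - filled)
            input.copyInto(buffer, filled, offset, offset + n)
            filled += n
            offset += n
            if (filled == windowSize) {
                onWindowReady(buffer.copyOf())  // emit a complete 100 ms window
                filled = 0
            }
        }
    }
}

fun main() {
    var windows = 0
    val acc = PcmAccumulator { windows++ }
    // 44.1 kHz * 2 channels * 2 bytes * 0.1 s = 17 640 bytes per window.
    acc.process(ByteArray(17_640))
    println(windows)  // prints 1
}
```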

AudioVisualizationCenter receives these bytes, converts them to the right format, and performs a fast Fourier transform; then we search the resulting array of complex values for the ranges we need and extract the amplitudes of the frequencies that fall into them.
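The band-extraction step can be sketched as follows. The band edges and names are my assumptions for illustration: given the FFT magnitudes, map each bin to its frequency (`freq = k * sampleRate / fftSize`) and average the magnitudes that land in each band.

```kotlin
// Sketch of picking band amplitudes out of an FFT result
// (band edges and names are assumptions, not the real My Wave values).
data class AudioData(val low: Double, val mid: Double)

fun bandAmplitudes(
    magnitudes: DoubleArray,   // |FFT| for bins 0 .. fftSize/2
    sampleRate: Int,
    fftSize: Int,
): AudioData {
    fun bandAverage(fromHz: Double, toHz: Double): Double {
        // Bin k holds frequency k * sampleRate / fftSize.
        val fromBin = (fromHz * fftSize / sampleRate).toInt().coerceAtLeast(1)
        val toBin = (toHz * fftSize / sampleRate).toInt().coerceAtMost(magnitudes.size - 1)
        if (toBin < fromBin) return 0.0
        var sum = 0.0
        for (k in fromBin..toBin) sum += magnitudes[k]
        return sum / (toBin - fromBin + 1)
    }
    return AudioData(
        low = bandAverage(20.0, 250.0),    // drives the color fill depth
        mid = bandAverage(250.0, 2_000.0), // drives the wave bending
    )
}

fun main() {
    // Fake spectrum for a 2048-point FFT: a spike near 108 Hz (bin 5).
    val mags = DoubleArray(1025)
    mags[5] = 10.0
    println(bandAmplitudes(mags, sampleRate = 44_100, fftSize = 2_048))
}
```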

Then we send the data into a Flow, and on every emission WaveAnimation receives a new value.
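The hand-off can be sketched with a `SharedFlow` from kotlinx.coroutines (the names are illustrative, and `replay` is set only so this standalone demo works regardless of subscription timing):

```kotlin
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.flow.take
import kotlinx.coroutines.flow.toList
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// Sketch of the Flow hand-off (illustrative, not the real My Wave code).
data class AudioData(val low: Double, val mid: Double)

// replay = 2 keeps this demo deterministic; a production pipeline
// would typically use replay = 0 with a live subscriber.
val audioDataFlow = MutableSharedFlow<AudioData>(replay = 2)

fun main() = runBlocking {
    // Analysis side: emit one value per processed 100 ms window.
    audioDataFlow.emit(AudioData(low = 0.8, mid = 0.3))
    audioDataFlow.emit(AudioData(low = 0.5, mid = 0.6))

    // Animation side: collect the values as they arrive.
    val collector = launch {
        val received = audioDataFlow.take(2).toList()
        println(received.size)  // prints 2
    }
    collector.join()
}
```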

WaveAnimationRenderer, on each animation frame request, interpolates from the previous AudioData value toward the new one based on the elapsed time within the 100 ms data window. We abandoned ValueAnimator and compute the animation values for each frame ourselves, which turned out to be much more efficient: a ValueAnimator has to be created for every new animated value, and since a new value arrives every 100 ms, we would have had to spawn a new animator every 100 ms for as long as data kept coming.
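The per-frame interpolation described above can be sketched like this. The class is my own minimal illustration, assuming a plain linear interpolation over the 100 ms window; the real renderer may use a different easing.

```kotlin
import kotlin.math.min

// Sketch of frame-driven interpolation replacing ValueAnimator
// (illustrative, not the actual WaveAnimationRenderer).
data class AudioData(val low: Double, val mid: Double)

class WaveAnimator(private val windowMs: Long = 100) {
    private var previous = AudioData(0.0, 0.0)
    private var target = AudioData(0.0, 0.0)
    private var targetSetAtMs = 0L

    fun onNewData(data: AudioData, nowMs: Long) {
        previous = valueAt(nowMs)   // start from wherever the wave currently is
        target = data
        targetSetAtMs = nowMs
    }

    // Called on every animation frame request: no animator objects are
    // created, we just compute the current value from the elapsed time.
    fun valueAt(nowMs: Long): AudioData {
        val t = min((nowMs - targetSetAtMs) / windowMs.toDouble(), 1.0)
        fun lerp(a: Double, b: Double) = a + (b - a) * t
        return AudioData(lerp(previous.low, target.low), lerp(previous.mid, target.mid))
    }
}

fun main() {
    val anim = WaveAnimator()
    anim.onNewData(AudioData(1.0, 2.0), nowMs = 0)
    println(anim.valueAt(50))   // halfway through the window
    println(anim.valueAt(100))  // window complete, value settled
}
```

Since the frame clock drives the computation directly, a burst of new values costs nothing extra: `onNewData` only swaps the interpolation endpoints.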

I’d like to note separately that how legible and, let’s say, clean the rendering looks depends heavily on the genre of music you listen to. This slightly spoiled the feedback I collected from colleagues: I asked them to test My Wave and tell me what they thought. The feedback came back that, overall, everyone liked everything, and it was clear that My Wave really does respond to frequencies, but there was no spark – it didn’t grab them. It turned out that all the respondents had simply been listening to hard rock. It has a lot of mid and high frequencies, so the analyzer gets clogged and the resulting visualization isn’t the most distinct.

Country and rap, on the other hand, performed well. Country generally has everything going for it: clear individual drums, bright guitars, banjo – the mix is laid out beautifully and, as a result, visualizes just as beautifully. Rap was a revelation for me here – not my style of music at all, but it visualizes very nicely: the rhythmic precision with which the vocal actually lands on the beat and the low frequencies shines through (the voice itself sits in the mids).

Classical music, unfortunately, doesn’t show up as vividly yet either: it has few of the familiar low frequencies and a lot of mids and highs. Since we don’t render the highs yet (we haven’t settled on a design for them), only the mids remain. Modern classical fares much better in terms of visualization.

Reflections on the future

I used to go straight to my favorite tracks all the time, but now I listen to music through My Wave. It is now launched at least once a week by 70% of all Music listeners. Live animation in tandem with the My Wave algorithms makes for an engaging user experience: you not only listen to what you like, you can also see it. At least in terms of frequencies.

Personally, I have a couple of wishes that I hope to implement.

Firstly, because of the fixed size of the time window, the audio analysis may not be as accurate as we would like, and some instruments may drop out from time to time – for example, a sharp, short drum hit that simply didn’t fall within the analyzed interval, and similar cases. A minor thing, but one that can be polished.

Secondly, I want to refine the rendering of high frequencies. They are part of the music too, and it’s a shame that classical fans, among others, can’t see them in My Wave yet.

If you have wishes or suggestions for My Wave in particular or Music in general, write to me in the comments or in a private message – I’ll be happy to respond.
