Beep, Beep, I’m a sheep

In this translated article, we discuss how to create a beeper that works across different platforms.

Audio I/O is a tricky topic that scares many musicians who program and programmers who are into music. Let’s try to figure it out! In this article, we will look at how sound works on each modern desktop OS.

Today’s case study is a simple beeper. Remember that annoying thing inside your PC case that makes an unpleasant buzzing sound? These days it is little more than a memory. I suggest building a library that reproduces similar sounds on every OS.

The end result is available at this link.

WINDOWS

We’re in luck with Windows: the Windows API already provides a Beep(frequency, duration) function, and we can use it.
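
Calling it is about as simple as it gets. A minimal sketch (a plain console program; Beep comes with the standard Windows headers):

#include <windows.h>

int main(void) {
  Beep(440, 500); /* 440 Hz for 500 ms */
  return 0;
}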

This function has a very long and complicated history. It was introduced to play beeps through the hardware PC speaker driven by the 8254 programmable timer. As more and more computers shipped without a beeper, the function became obsolete over time. However, in Windows 7 it was rewritten to play the beeps through the sound card API.

However, the apparent simplicity of this function hides the overall complexity of the Windows sound APIs. The oldest of them, MME, was released in 1991; it is the default audio API and has the broadest support.

MME is known for its high playback latency and will probably not be suitable for most audio applications. WASAPI, released in 2007, has lower latency, especially when used in exclusive mode (a mode in which the user cannot listen to Spotify or any other application while yours is running). WASAPI is a good choice for audio applications; also worth noting is DirectSound, which is a wrapper around WASAPI for interacting with DirectX.

If unsure, use WASAPI.
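
For orientation, here is a rough sketch of what opening the default output device in shared mode looks like with WASAPI (a hypothetical helper, not part of the beeper library; error handling omitted, link with ole32):

#define COBJMACROS
#include <windows.h>
#include <initguid.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

/* Sketch: find the default render device and initialize an audio client. */
static IAudioClient *open_default_output(void) {
  IMMDeviceEnumerator *enumerator = NULL;
  IMMDevice *device = NULL;
  IAudioClient *client = NULL;
  WAVEFORMATEX *fmt = NULL;

  CoInitializeEx(NULL, COINIT_MULTITHREADED);
  CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                   &IID_IMMDeviceEnumerator, (void **)&enumerator);
  IMMDeviceEnumerator_GetDefaultAudioEndpoint(enumerator, eRender, eConsole,
                                              &device);
  IMMDevice_Activate(device, &IID_IAudioClient, CLSCTX_ALL, NULL,
                     (void **)&client);
  IAudioClient_GetMixFormat(client, &fmt);
  /* The buffer duration is given in 100-ns units, so 10000000 is 1 second.
   * AUDCLNT_SHAREMODE_EXCLUSIVE would take over the device instead. */
  IAudioClient_Initialize(client, AUDCLNT_SHAREMODE_SHARED, 0, 10000000, 0,
                          fmt, NULL);
  /* Next steps: GetService with IID_IAudioRenderClient, Start, and then
   * fill buffers obtained with GetBuffer/ReleaseBuffer. */
  return client;
}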

LINUX

Audio is one of the few areas where the Linux APIs are just as good as those of other platforms. First of all, there is ALSA, which is part of the kernel itself.

ALSA interacts directly with the hardware, and if your application is going to be the only one using the sound device, ALSA can be a good compromise between complexity and performance. If you’re building a synthesizer or a sampler for the Raspberry Pi, ALSA is a good choice.

In addition, there is PulseAudio, an audio abstraction layer built on top of ALSA. It routes audio from various applications and tries to mix audio streams so that critical applications do not suffer from latency issues. While PulseAudio provides many features that would not be possible with ALSA (such as routing audio over the Internet), most music applications do not use it.

Many use the JACK Audio Connection Kit. JACK was created for professional musicians: it takes care of real-time playback, whereas PulseAudio was created for casual users who can tolerate some lag when playing YouTube videos. JACK connects audio applications with minimal latency, but keep in mind that it still runs on top of ALSA, so if your application is going to be the only audio application running (for example, if you are building a drum machine out of an old Raspberry Pi), then ALSA is both easier to use and faster. A minimal JACK client is sketched below.
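
To give an idea of the API, here is a rough sketch of a JACK client that plays a sawtooth tone (a hypothetical illustration, not part of the beeper library; compile with -ljack -lm):

#include <jack/jack.h>
#include <math.h>
#include <unistd.h>

static jack_port_t *out_port;
static int freq = 440;
static long phase = 0;

/* JACK calls this on its real-time thread whenever it needs more samples. */
static int process(jack_nframes_t nframes, void *arg) {
  jack_client_t *client = arg;
  jack_default_audio_sample_t *buf = jack_port_get_buffer(out_port, nframes);
  jack_nframes_t sr = jack_get_sample_rate(client);
  for (jack_nframes_t i = 0; i < nframes; i++) {
    /* Sawtooth in the range -0.5..0.5 at freq Hz. */
    buf[i] = fmodf((float)phase * freq / sr, 1.0f) - 0.5f;
    phase++;
  }
  return 0;
}

int main(void) {
  jack_client_t *client = jack_client_open("beeper", JackNullOption, NULL);
  out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                JackPortIsOutput, 0);
  jack_set_process_callback(client, process, client);
  jack_activate(client);
  /* The port still has to be routed to the speakers, e.g.:
   * jack_connect(client, "beeper:out", "system:playback_1"); */
  sleep(5); /* keep the client alive while JACK calls process() */
  jack_client_close(client);
  return 0;
}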

Making a beeper function with ALSA is actually not that difficult. We need to open the default audio device, configure it to use a well-supported sample rate and sample format, and start writing data to it. The audio data can be a sawtooth wave, as described in the previous article:

#include <alsa/asoundlib.h>

int beep(int freq, int ms) {
  static snd_pcm_t *pcm = NULL;
  if (pcm == NULL) {
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0)) {
      return -1;
    }
    /* Mono, unsigned 8-bit, 8000 Hz, resampling allowed, ~20 ms latency. */
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_U8, SND_PCM_ACCESS_RW_INTERLEAVED,
                       1, 8000, 1, 20000);
  }
  unsigned char buf[2400];
  for (int i = 0; i < ms / 50; i++) {
    snd_pcm_prepare(pcm);
    for (size_t j = 0; j < sizeof(buf); j++) {
      /* Truncation to unsigned char produces a sawtooth wave;
       * freq == 0 writes silence. */
      buf[j] = freq > 0 ? (255 * j * freq / 8000) : 0;
    }
    int r = snd_pcm_writei(pcm, buf, sizeof(buf));
    if (r < 0) {
      snd_pcm_recover(pcm, r, 0);
    }
  }
  return 0;
}
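
A hypothetical program using this function could then look like this:

int main(void) {
  beep(440, 200); /* 440 Hz for 200 ms */
  beep(880, 200);
  beep(0, 100);   /* freq 0 writes silence, i.e. a pause */
  return 0;
}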

Here we use a synchronous API and don’t check for errors to keep the function short and simple. Synchronous blocking I/O is probably not the best option for serious audio applications, and thankfully ALSA comes with several other transfer methods and operating modes. But for our simple experiment, this is quite enough. When in doubt, use ALSA. If you have to interact with other audio applications, use JACK.

MACOS

In the case of macOS, things are quite simple, but not entirely trivial.

macOS has the CoreAudio framework, responsible for sound on both the desktop and iOS. CoreAudio itself is a low-level API tightly integrated with the OS to optimize latency and performance. To play audio with CoreAudio, you need to create an AudioUnit (an audio plugin). The AudioUnit API is a bit verbose, but easy to understand. Here’s how to create a new AudioUnit:

// This snippet assumes #include <AudioToolbox/AudioToolbox.h> and is meant
// to live inside an initialization function.
AudioComponent output;
AudioUnit unit;
AudioComponentDescription descr = {0};
AURenderCallbackStruct cb = {0};
AudioStreamBasicDescription stream = {0};

descr.componentType = kAudioUnitType_Output;
descr.componentSubType = kAudioUnitSubType_DefaultOutput;
descr.componentManufacturer = kAudioUnitManufacturer_Apple;

// Actual sound will be generated asynchronously in the callback tone_cb
cb.inputProc = tone_cb;

// Mono, unsigned 8-bit linear PCM at 8000 Hz, one frame per packet
stream.mFormatID = kAudioFormatLinearPCM;
stream.mFormatFlags = 0;
stream.mSampleRate = 8000;
stream.mBitsPerChannel = 8;
stream.mChannelsPerFrame = 1;
stream.mFramesPerPacket = 1;
stream.mBytesPerFrame = 1;
stream.mBytesPerPacket = 1;

output = AudioComponentFindNext(NULL, &descr);
AudioComponentInstanceNew(output, &unit);
AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input, 0, &cb, sizeof(cb));
AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &stream, sizeof(stream));
AudioUnitInitialize(unit);
AudioOutputUnitStart(unit);

This code only creates and starts a new AudioUnit; the actual sound generation happens asynchronously in the callback:

// beep_freq, theta and counter are globals defined elsewhere in the library:
// the requested frequency, the oscillator phase and a countdown of samples
// left to play.
static OSStatus tone_cb(void *inRefCon,
                        AudioUnitRenderActionFlags *ioActionFlags,
                        const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                        UInt32 inNumberFrames, AudioBufferList *ioData) {
  unsigned char *buf = ioData->mBuffers[0].mData;
  for (UInt32 i = 0; i < inNumberFrames; i++) {
    // Truncation to unsigned char produces the same sawtooth as with ALSA
    buf[i] = beep_freq > 0 ? (255 * theta * beep_freq / 8000) : 0;
    theta++;
    counter--;
  }
  return noErr;
}

This callback generates audio in the same way as we did with ALSA, but it is called asynchronously when CoreAudio thinks that the audio buffer is almost empty and needs to be filled with new audio samples.

This asynchronous approach to sound generation is very common, and almost every modern audio library supports it. If you want to create a music application, you should design it with asynchronous playback in mind.
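
As a sketch of that idea (a hypothetical illustration, independent of any particular backend): the control thread only updates a couple of shared variables, and the audio callback reads them without ever blocking.

#include <stdatomic.h>

/* Shared state: written by the control thread, read by the audio callback. */
static _Atomic int tone_freq = 0;     /* 0 means silence */
static _Atomic long tone_samples = 0; /* samples left to play */

/* Called from the UI/control thread to request a beep. */
void request_beep(int freq, int ms, int sample_rate) {
  atomic_store(&tone_samples, (long)ms * sample_rate / 1000);
  atomic_store(&tone_freq, freq);
}

/* Called by the audio backend whenever it needs more samples.
 * It must never block or allocate memory. */
void fill_buffer(unsigned char *buf, int nframes, int sample_rate) {
  static long phase = 0;
  int freq = atomic_load(&tone_freq);
  for (int i = 0; i < nframes; i++) {
    long left = atomic_load(&tone_samples);
    /* Same truncated sawtooth as before; silence once the countdown ends. */
    buf[i] = (freq > 0 && left > 0) ? (255 * phase * freq / sample_rate) : 0;
    if (left > 0) atomic_store(&tone_samples, left - 1);
    phase++;
  }
}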

When in doubt, use CoreAudio.

Sounds complicated, right?

If you’re building a music app, you can follow the same path and implement an audio backend for each of WASAPI, ALSA, and CoreAudio. In fact, it is not that difficult. You can see the full sources of beeper; it is about 100 lines of code for all three platforms.

However, there are a number of good cross-platform libraries such as:

  • RtAudio + RtMidi (very easy to use, just one .cpp and one .h file each)
  • PortAudio + PortMidi (written in C and slightly larger, with many different backends; see the sketch after this list)
  • SoundIO is a wonderful little library from the creator of Zig.
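
For example, a hypothetical beeper built on top of PortAudio could look roughly like this (a sketch; error handling omitted, link with -lportaudio):

#include <portaudio.h>

static int g_freq = 440;

/* PortAudio calls this whenever it needs more samples, asynchronously,
 * just like the CoreAudio callback above. */
static int pa_cb(const void *input, void *output, unsigned long frames,
                 const PaStreamCallbackTimeInfo *time_info,
                 PaStreamCallbackFlags flags, void *user_data) {
  static long phase = 0;
  unsigned char *buf = output;
  for (unsigned long i = 0; i < frames; i++) {
    buf[i] = g_freq > 0 ? (255 * phase * g_freq / 8000) : 0; /* sawtooth */
    phase++;
  }
  return paContinue;
}

int main(void) {
  PaStream *stream;
  Pa_Initialize();
  /* Mono, unsigned 8-bit output at 8000 Hz; PortAudio picks the buffer size. */
  Pa_OpenDefaultStream(&stream, 0, 1, paUInt8, 8000,
                       paFramesPerBufferUnspecified, pa_cb, NULL);
  Pa_StartStream(stream);
  Pa_Sleep(500); /* let it beep for half a second */
  Pa_StopStream(stream);
  Pa_Terminate();
  return 0;
}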

Some people prefer to use JUCE for cross-platform audio applications, but it has its limitations.

All of the above may seem like a daunting task, but there are many implementations, and most of them are good. So keep trying!

I hope you enjoyed this article. You can follow news and projects on GitHub, on Twitter, or subscribe via RSS.
