AudioUnit processing buffer structures

There is some confusion about AudioUnit buffer-processing structures, and I occasionally get questions about them, so I thought I would put together a basic primer to get the ball rolling. This is just a simple post with a few examples, but it would have saved me a decent chunk of time and frustration if I had had this info when I first started.

AudioUnits have several options when it comes to input/output structure, each with its own benefits and drawbacks:

Kernel Process: This structure is useful for mono or n-to-n channel effects that have no interaction between the channels. An AU kernel is by nature self-contained and processes a single audio stream. The kernel can be duplicated as needed by the host application to match the input and output demands, OR the channel structure can be stated explicitly, such as 2-in/4-out, 1-in/7-out, and so on. (The implicit default is the {-1, -1} wildcard mode.)

ProcessBufferLists: This structure will be familiar to anyone who is used to the VST processReplacing methods. It makes for slightly more complicated code, since you have to fetch the buffers explicitly in the processing function and assign them to pointers, but it allows for interaction between channels during processing. This is much more useful for effects like reverbs, ping-pong delays, stereo wideners, etc.

Here are examples of each. I tried to keep the code as similar as possible to make the similarities and differences clear:

In the header file:

Kernel NtoNEffect:

class NtoNEffectKernel : public AUKernelBase	// create a kernel here
{

public:
	NtoNEffectKernel(AUEffectBase *inAudioUnit );
    
	// processes one channel of non-interleaved samples
	virtual void 		Process(	const Float32 	*inSourceP,
                                Float32		 	*inDestP,
                                UInt32 			inFramesToProcess,
                                UInt32			inNumChannels,
                                bool &			ioSilence);
    
private:

};

class NtoNEffect : public AUEffectBase
{
public:
	NtoNEffect(AudioUnit component);
    
	virtual AUKernelBase *		NewKernel() {return new NtoNEffectKernel(this);}
    
protected:

};

Compared to:

ProcessBufferLists IndependentLREffect:

class IndependentLREffect : public AUEffectBase {
    
public:
  virtual OSStatus ProcessBufferLists(AudioUnitRenderActionFlags &ioActionFlags,
                                      const AudioBufferList &inBuffer,
                                      AudioBufferList &outBuffer,
                                      UInt32 inFramesToProcess);
    
private:
    
};

And in the cpp file:

Kernel NtoNEffect:

void NtoNEffectKernel::Process(	const Float32 	*inSourceP,
                           Float32 		*inDestP,
                           UInt32 			inFramesToProcess,
                           UInt32			inNumChannels,
                           bool &			ioSilence)
{
    for(UInt32 i = 0; i < inFramesToProcess; ++i) // or use a 'while loop' like: while(inFramesToProcess--)
    {
        *inDestP++ = *inSourceP++;
    }
}
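To show why the self-contained kernel model is convenient, here is a standalone sketch, stripped of the AU SDK types. The struct name and coefficient value are mine, not SDK code, but it illustrates the idea: each kernel instance keeps its own per-channel state, so n-channel support falls out of the host simply duplicating the kernel.

```cpp
#include <cstddef>

// Hypothetical kernel-style processor: a one-pole lowpass whose single
// sample of state (z1) is private to this instance. Duplicating the
// struct per channel gives you n-to-n processing with no shared state.
struct OnePoleKernel {
    float z1 = 0.0f;   // filter state, private to this "kernel"
    float a  = 0.5f;   // smoothing coefficient (illustrative value)

    void Process(const float *src, float *dst, std::size_t frames) {
        for (std::size_t i = 0; i < frames; ++i) {
            z1 = a * src[i] + (1.0f - a) * z1;   // one-pole smoothing
            dst[i] = z1;
        }
    }
};
```
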

Compared to:

ProcessBufferLists IndependentLREffect:

OSStatus IndependentLREffect::ProcessBufferLists(AudioUnitRenderActionFlags &ioActionFlags,
                                      const AudioBufferList &inBuffer,
                                      AudioBufferList &outBuffer,
                                      UInt32 inFramesToProcess)
{
  const Float32 *inSourcePL = (const Float32 *)inBuffer.mBuffers[0].mData;
  const Float32 *inSourcePR = (const Float32 *)inBuffer.mBuffers[1].mData;

  Float32 *inDestPL = (Float32 *)outBuffer.mBuffers[0].mData;
  Float32 *inDestPR = (Float32 *)outBuffer.mBuffers[1].mData;

  for(UInt32 frame = 0; frame < inFramesToProcess; ++frame) {
    *inDestPL++ = *inSourcePL++;
    *inDestPR++ = *inSourcePR++;
  }

  return noErr; // don't forget to return an OSStatus
}
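The pass-through above doesn't actually exercise the main advantage of ProcessBufferLists, which is channel interaction, so here is a standalone sketch of the kind of per-frame stereo math you could drop into that loop. It is a simple mid/side width control on plain float arrays; the function name and width parameter are mine, not SDK code.

```cpp
#include <cstddef>

// Hypothetical per-frame stereo interaction: mid/side width control.
// width = 1.0 leaves the signal unchanged; width = 0.0 collapses to mono.
void ProcessStereoWidth(const float *inL, const float *inR,
                        float *outL, float *outR,
                        std::size_t frames, float width)
{
    for (std::size_t i = 0; i < frames; ++i) {
        float mid  = 0.5f * (inL[i] + inR[i]);           // shared content
        float side = 0.5f * (inL[i] - inR[i]) * width;   // stereo difference
        outL[i] = mid + side;
        outR[i] = mid - side;
    }
}
```

This is exactly the kind of L/R cross-talk the kernel model can't express, since each kernel only ever sees one channel.
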

Hopefully those examples are clear enough to demonstrate the basics.

As an addendum, here is an example of how to set the channel structure:

In the header file, use this method:

virtual UInt32 SupportedNumChannels(const AUChannelInfo ** outInfo);

And in the cpp file:

UInt32 MyPlugin::SupportedNumChannels (const AUChannelInfo** outInfo)
{
	static const AUChannelInfo sChannels[1] = {{1,1}}; // mono only; use {{2,2}} for stereo, etc.
	if (outInfo) *outInfo = sChannels;
	return sizeof (sChannels) / sizeof (AUChannelInfo);
}

As an example of explicitly supporting 3 modes — 1×1, 1×2, 2×2 — use:

static const AUChannelInfo sChannels[3] = {{1,1},{1,2},{2,2}};

AudioUnitProperties.h has more details.

EDIT: I just found this decade-old document from the old AudioUnit SDK which may be helpful. It has some more explicit details about IO modes:

http://www.solamors.com/IOChannelConfigurations.rtf



10 comments

  1. luis

    Thanks a lot for the great posts!
    I am trying to build an AU with TWO audio inputs and one output (let’s say, kind of a 2-to-1 mixer). Even though both buses are strictly mono, I prefer to use two mono inputs rather than one single “fake” stereo input. Do you know how this affects the “Process” method and its pointer Float32 *inSourceP? How can I access the audio data from each input? Thanks a lot in advance,
    Luis

  2. Using the ‘Process’ method from the kernel base, you have to match input and output counts, since it creates a new kernel with one input and one output per channel (n-to-n). If you would like a stereo-in, mono-out effect (m-to-n), you would use ‘ProcessBufferLists’ by subclassing AUBase, state the channel configuration explicitly, and write the mixer code yourself.

    I found an old document from Apple (IOChannelConfigurations.rtf) that goes into some detail about IO configurations and added a link at the end of the blog post. It might be helpful.
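    For illustration, the core of such a mixer, ignoring all the AU plumbing, could look like this sketch. The names and gain parameters are hypothetical; in a real ProcessBufferLists override the pointers would come from the AudioBufferList’s mBuffers[] entries.

```cpp
#include <cstddef>

// Hypothetical m-to-n core: two mono inputs summed into one mono
// output with per-input gains. This is the part you write yourself.
void MixTwoToOne(const float *inA, const float *inB, float *out,
                 std::size_t frames, float gainA, float gainB)
{
    for (std::size_t i = 0; i < frames; ++i)
        out[i] = gainA * inA[i] + gainB * inB[i];
}
```
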

  3. luis

    Thanks a lot for the quick reply. Really really useful. Now I will have to figure out how to actually get the input scopes in Process, but I think I have a clue (also from another of your posts). I will post longer when I get there. Thanks again!

  4. luis

    I tried your code on this page. It works well, but one must be careful when handling the pointers “inDestPL” and “inSourcePL” because they point to the same memory location. So, swapping the stereo channels is not accomplished by swapping inSourcePL/R (you get two channels with the same audio instead), but by using a bridge variable. It makes sense for it to be so anyway.
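    A standalone sketch of that point: when input and output share memory (in-place processing), swapping channels needs a temporary “bridge” variable, or the second write reads already-overwritten data. The function name is illustrative only.

```cpp
#include <cstddef>

// Hypothetical in-place channel swap: without 'temp', writing bufL
// first would destroy the left sample before it reaches bufR.
void SwapChannelsInPlace(float *bufL, float *bufR, std::size_t frames)
{
    for (std::size_t i = 0; i < frames; ++i) {
        float temp = bufL[i];   // bridge variable: save L before overwriting
        bufL[i] = bufR[i];
        bufR[i] = temp;
    }
}
```
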

  5. luis

    Hi Alex, I finally managed to create a sort of mixer AU template, with two (or more) inputs and one output. Help from Apple support was needed, but it was not so difficult after all. How can I send it to you (in case you are interested)?
    On the other hand, I am quite interested in the coding details of your “Quick and dirty oscilloscope/waveform AU plugin”. I can donate. Please let me know. Thanks, Luis

    • Oh nice! I’m glad you got that sorted. Yeah, I’ll take a copy. The oscilloscope plugin was an old Apple sample code that I really just updated, I’ll see if I can dig up the project and get it into working order with the newest Xcode, and I’ll post it on that blog if I can find it.

      • Luis Weruaga

        Attached the template in a zip file.

        AUBase is directly subclassed, while “Render” is required to pull the input data (other methods, such as ProcessBufferLists or ProcessMultipleBufferLists, do not work).

        This example is tailored to two inputs and one output, but extension to multiple inputs is clearly straightforward. However, if more outputs are needed, then “RenderBus” must be used to pull and push the signals (according to Apple’s evangelist). It is something that I have not tried yet though.

        “Render” uses ring buffers to store consecutive segments overlapping 50% in time, the same way many DSP applications work with streams. There is also a call to a processing thread when a new block of data is available. If I remember correctly, I did this part on my own rather than adapting it from elsewhere. But it works 🙂
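        As a rough standalone illustration of that 50%-overlap scheme (the struct and names below are hypothetical, not taken from the attached template): each hop of blockSize/2 new samples yields a full block that shares its first half with the previous block.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical 50%-overlap segmenter: a small ring holds the last
// blockSize samples; every half-block "hop" pushed in produces one
// complete block overlapping the previous block by 50%.
struct OverlapSegmenter {
    std::size_t blockSize;
    std::vector<float> ring;   // the last blockSize samples
    std::size_t filled = 0;

    explicit OverlapSegmenter(std::size_t n) : blockSize(n), ring(n, 0.0f) {}

    // Push blockSize/2 new samples; returns true once 'block' holds a
    // complete overlapping segment (false while still priming).
    bool pushHop(const float *hop, std::vector<float> &block) {
        std::size_t half = blockSize / 2;
        for (std::size_t i = 0; i < half; ++i) ring[i] = ring[i + half]; // keep old 2nd half
        for (std::size_t i = 0; i < half; ++i) ring[half + i] = hop[i];  // append new hop
        filled += half;
        if (filled < blockSize) return false;  // not enough history yet
        block = ring;
        return true;
    }
};
```
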

        Let me know if you have comments and whether it actually works for you. Enjoy!

        Luis


      • I added a link to a ZIP of the Oscilloscope project to that blog post, so you can grab it there. It should build fine with a little work (the paths to the utility classes are wonky, so you might have to fix that so the compiler can find the files) but the code is all there and relatively up to date. I had been intending to add a trigger hold function since that would be more useful than just the ‘hold’ button, but I just haven’t had the time.

      • Luis Weruaga

        Dear Alex,

        I forgot to thank you for the code. Thanks a lot. Yes, I fixed a couple of things (this learning curve never ends) and I got it working. Awesome! Your example must have been from the time of V1 AUs. By the way, my Apple guy mentioned literally “it will most likely be a v3 AU sample since V2 AUs aren’t going to be something we would want people to adopt moving forward with 10.11 and iOS 9 and future.” So, a change in AUs is in the making. It is about time, and hopefully it will sort out all the annoying warnings and deprecated types. Best,

        Luis


      • No problem. Yeah, I think it was from back when they distributed Xcode and sample projects on CD, so I don’t think that code is floating around anywhere else. I figured as much about AUv3. I paused development on a few things until the AU v3 information and example code are more fleshed out, so hopefully Apple will make that happen soon.
