Hi team,
I have a question regarding the IPCBuffer module in the Audio Weaver framework.
Could you please clarify the following:
- Does the IPCBuffer module handle all inter-core data routing automatically, or is any additional configuration required in the code?
- Is there any specific hardware or system requirement needed to support the IPCBuffer, or is it purely software-based?
- Are there any best practices or specific limitations I should be aware of when using the IPCBuffer module in a multi-instance or multi-core design?
Looking forward to your response and any guidance you can provide.
Best regards,
Monika V
2:03pm
Hi Monika,
Here’s how IPCBuffer behaves, based on the spec you linked and the multi‑instance docs.
Please note that the IPCBuffer module is both deprecated and marked "BETA". Its current replacement is "ChangeThreadV2", which can manage passing data between different clock dividers on a single instance as well as passing data between instances (cores) via shared memory.
1. Does IPCBuffer handle inter‑core routing automatically?
Conceptually yes, but with some prerequisites and Designer‑side configuration:
Once the shared heap and multi‑instance target are set up, you do not need to write extra routing code for audio data between instances; IPCBuffer + IPCFifoIn/Out and the AWECore shared heap handle that.
However, you must set targetInstance on the module (0–255, and not greater than the number of instances actually available on the target). Beyond that, you do not need any additional routing configuration in code.
So: once multi‑instance + shared heap are integrated in your platform, IPCBuffer gives you “automatic” inter‑instance routing from the layout point of view.
2. Hardware / system requirements vs. pure software
Shared Heap Required
“The IPCBuffer module requires a target to have shared heap, and for each AWE instance to be initialized with the same shared heap. If shared heap is not present on the target, a design containing IPCBuffer will fail to build.”
Shared heap is part of the AWECore integration.
Multi‑Instance AWECore Required
IPCBuffer requires a multi‑instance target. If you only expose one instance of AWECore in AWE_Server, any layout using IPCBuffer will fail to build.
Block Size Alignment
IPCBuffer shares the same constraints as multi‑instance AWE:
All instances must use the same fundamental block size.
The “hardware” requirement is really multiple cores/processing contexts that:
- Can access a common memory region (the shared heap).
- Have an underlying inter‑processor communication/synchronization mechanism (interrupts/IPCC/etc.) implemented in your platform BSP code.
AWECore assumes these exist and uses the shared heap; IPCBuffer itself is still “just software,” but it depends on that platform‑level multi‑core integration.
So IPCBuffer is software‑only within AWE, but it requires a system with:
- A shared memory region mapped to all cores.
- A properly configured AWE shared heap in that region for all instances.
- A multi‑instance AWECore integration that keeps instances synchronized and can signal pumps on each instance.
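As a rough mental model of the build-time checks these requirements imply, here is a short sketch. The function and field names (`validate_ipcbuffer_target`, `shared_heap_id`, `block_size`) are our own illustrations, not Audio Weaver APIs; the real checks happen inside the tools and AWECore integration.

```python
def validate_ipcbuffer_target(instances, target_instance):
    """Check the constraints a design using IPCBuffer must satisfy:

    - the target exposes more than one instance,
    - every instance was initialized with the same shared heap,
    - every instance uses the same fundamental block size,
    - targetInstance addresses an instance that actually exists.

    Each instance is modeled as {'shared_heap_id': ..., 'block_size': ...}.
    """
    if len(instances) < 2:
        raise ValueError("IPCBuffer requires a multi-instance target")
    heaps = {i["shared_heap_id"] for i in instances}
    if len(heaps) != 1 or None in heaps:
        raise ValueError("all instances must be initialized with the same shared heap")
    if len({i["block_size"] for i in instances}) != 1:
        raise ValueError("fundamental block size must match across instances")
    if not (0 <= target_instance < len(instances)):
        raise ValueError("targetInstance exceeds the number of available instances")
    return True


# Two cores, same shared heap, same block size, routing to instance 1:
ok = validate_ipcbuffer_target(
    [{"shared_heap_id": 7, "block_size": 32},
     {"shared_heap_id": 7, "block_size": 32}],
    target_instance=1,
)
print(ok)  # True
```

Any violated constraint (single instance, mismatched heaps or block sizes, out-of-range targetInstance) raises, mirroring the "design will fail to build" behavior described above.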
3. Best practices & limitations for multi‑instance / multi‑core use
Based on the spec and the multi‑instance docs, here are the main points.
3.1 Core limitations to keep in mind
- The target must have a shared heap, and all instances must be initialized with the same shared heap pointer(s).
- The target must be multi‑instance (targetInstance cannot exceed the number of instances available).
- The fundamental block size must match across instances.
IPCBuffer is marked as BETA:
The module text label includes “BETA”.
Doc explicitly says: this is to allow compatibility‑breaking changes in future releases.
3.2 Instance & thread configuration
Use targetInstance to select the destination AWECore instance (0–255). IPCBuffer also supports:
- bufferUpDownFactor – change clock divider and buffer size, like BufferUpV2/BufferDownV2:
  - 0 (default): inherit block size and clock divider.
  - > 0: buffer up, output block size = factor × input block size.
  - < 0: buffer down, output block size = input block size / abs(factor).
- layoutSubID – output thread ID:
  - 0–15 map to threads A–P.
  - 16 = “Inherit” (default).
Best practice: use these to avoid chaining extra BufferUp/Down and to place the downstream processing in the appropriate thread on the target instance, to control latency and scheduling.
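To make the bufferUpDownFactor arithmetic concrete, here is a minimal sketch (the helper names are ours, not an AWE API) of the output block size the module produces, and the one-output-block latency it adds per pin:

```python
def output_block_size(input_block: int, factor: int) -> int:
    """Output block size produced by IPCBuffer's bufferUpDownFactor.

    factor == 0: inherit the input block size (default).
    factor >  0: buffer up   -> factor * input_block.
    factor <  0: buffer down -> input_block / abs(factor); the input
                 block must divide evenly for a valid configuration.
    """
    if factor == 0:
        return input_block
    if factor > 0:
        return factor * input_block
    if input_block % -factor != 0:
        raise ValueError("input block size must be divisible by abs(factor)")
    return input_block // -factor


def added_latency_samples(input_block: int, factor: int) -> int:
    # The module's latency is one *output* block per pin, in samples.
    return output_block_size(input_block, factor)


print(output_block_size(32, 0))    # 32  (inherit)
print(output_block_size(32, 4))    # 128 (buffer up)
print(output_block_size(128, -4))  # 32  (buffer down)
```

Note how buffering up raises the latency reported on the text label (128 samples in the middle case), which is why chaining fewer IPCBuffer hops and choosing the factor carefully matters on the critical audio path.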
3.3 Latency & buffering
Latency introduced is equal to one output block size per pin:
Text label prints this latency (in samples).
For multi‑stage / multi‑hop routing (e.g., instance 0 → instance 1 → instance 2) latency will accumulate.
Best practice:
Minimize the number of IPCBuffer hops in the critical audio path.
Carefully choose bufferUpDownFactor to align with your real‑time constraints: smaller blocks = lower latency but more CPU / scheduling pressure.
3.4 Synchronization & overflow in multi‑core environments
From the “IPC Buffer FifoIn and FifoOut synchronization” section:
IPCFifoIn (writer) & IPCFifoOut (reader) use:
Double‑buffering (ping/pong).
A shared boolean flag and shared‑heap state to detect if one side “missed” a pump.
In embedded targets with tightly synchronized pumps, this is usually straightforward.
In OS environments like Linux/Win32, pumps can be delayed and synchronization can be lost:
When writer toggles, it checks if reader finished – otherwise, overflow.
When reader toggles, it checks if writer finished – otherwise, overflow.
On overflow, either module sets a flag via awe_fwSetOverflowOccurredMulti(); the Layout component performs auto re‑synchronization when it sees this.
Best practices:
- Ensure your multi‑instance integration calls awe_audioPump (or equivalent) regularly and coherently across cores, aligned with the fundamental block size.
- Monitor/handle overflow:
Overflows mean that some audio blocks were dropped or late; avoid sustained overload.
If you see frequent resync events in testing, you may need:
Higher thread priority for pump tasks.
Larger block sizes (trading latency for robustness).
Fewer IPCBuffer crossings in heavy CPU conditions.
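The ping/pong handshake described in 3.4 can be sketched generically. This is our illustration of the double-buffering idea only, not the actual IPCFifoIn/IPCFifoOut implementation; the class and method names are hypothetical.

```python
class PingPongFifo:
    """Generic double-buffered writer/reader handshake.

    Mirrors the idea in the spec: two buffers (ping/pong), a per-buffer
    'ready' flag in shared state, and overflow detection when one side
    has not finished by the time the other toggles.
    """

    def __init__(self, block_size: int):
        self.buffers = [[0.0] * block_size, [0.0] * block_size]
        self.ready = [False, False]  # buffer filled, awaiting the reader
        self.write_idx = 0
        self.read_idx = 0
        self.overflow = False

    def write(self, block):
        # Writer toggles; if the reader has not consumed this buffer yet,
        # flag an overflow (a missed pump) instead of silently overwriting.
        # The real modules would call awe_fwSetOverflowOccurredMulti() here.
        if self.ready[self.write_idx]:
            self.overflow = True
            return False
        self.buffers[self.write_idx][:] = block
        self.ready[self.write_idx] = True
        self.write_idx ^= 1
        return True

    def read(self):
        # Reader toggles; an empty buffer means the writer missed a pump.
        if not self.ready[self.read_idx]:
            self.overflow = True
            return None
        block = list(self.buffers[self.read_idx])
        self.ready[self.read_idx] = False
        self.read_idx ^= 1
        return block

    def resync(self):
        # Auto re-synchronization: drop buffered state and start clean,
        # as the Layout component does when it sees the overflow flag.
        self.ready = [False, False]
        self.write_idx = self.read_idx = 0
        self.overflow = False


fifo = PingPongFifo(block_size=4)
fifo.write([0.1, 0.2, 0.3, 0.4])
print(fifo.read())  # [0.1, 0.2, 0.3, 0.4]
```

With pumps interleaved one-for-one the handshake never overflows; if the reader stalls long enough for the writer to wrap around to an unconsumed buffer, write() fails and the overflow flag is set, after which resync() restores a clean state. This is why delayed pumps on Linux/Win32 show up as overflow/resync events.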
3.5 Multi‑instance / multi‑core design patterns
Fan‑out / fan‑in:
You can use multiple IPCBuffers to:
Send a source from one instance to multiple consumer instances.
Aggregate from many instances into one.
Requirement explicitly states: “There shall be no limitations imposed to the user as to number of times data can be sent between instances.”
Multi‑pin usage:
numPins controls the number of input/output pins; values >1 are supported. Use multiple pins rather than multiple separate IPCBuffer modules if the channels travel together – it’s simpler and keeps text‑label latency reporting aligned.
Data types:
All 32‑bit data types supported, plus complex float pairs.
You can safely pass complex streams (e.g., FFT bins) across instances.
2:58pm
Hi Gary,
Thank you for sharing the detailed information.
Let me first explain my requirements, and I would appreciate your suggestions on how to implement them.
I am exploring the multi-instance concept where Core 1 is responsible for importing and exporting audio samples, and Core 2 processes these samples using a simple audio pipeline (which is essentially multi-instance processing). My goal is to transfer the audio data from Core 1 to Core 2 without using hardware peripherals. Instead, I want to leverage AWE modules like IPC Buffer and ChangeThread via shared memory, to avoid writing custom BSP code.
My queries are as follows:
What exactly does "Deprecated" and "BETA" mean in this context?
I have allocated a shared heap that acts as common memory for both cores. When using the ChangeThread/IPCBuffer module in the audio pipeline, do I need to handle any callback functions, such as IPC notify-send or IPC send-reply, to transfer audio samples via shared memory?
In the multi-instance setup, I initialize each AWE instance with its respective parameters. The block size is identical on both cores, and I load the AWE signal chain at runtime through a connection with the tool. In the AWE Server, I can see both instances in the dropdown list. However, when running the AWE signal chain with IPCBuffer (where instance 0 communicates with instance 1, and instance 1 with instance 0), CPU load is only visible on Core 1 and the shared heap is populated with words, while Core 2 shows no CPU load.
Could you please provide an example code and signal chain for this multi-instance use case?
Thanks,
Monika V
4:11pm
Hi Monika,
From what I can determine, you don't seem to have a current Audio Weaver license and I cannot tell whether you have a support contract with DSP Concepts.
Can you please contact sales@dspconcepts.com with your request?
Thanks,
Gary W.