We all know how to play/capture audio from a single audio interface with our favourite audio API. But how do you play/capture audio synchronously across multiple audio interfaces, computers, local networks or even the internet? The basic principle is always the same and can roughly be split into three distinct tasks:
1. Query the current presentation/capture time of each audio interface.
2. Predict and convert between presentation/capture times of different clock domains using mathematical models.
3. Control the playback/capture rate of each audio interface.
After a brief introduction, this talk will examine each of the above tasks in detail and show how various algorithms and techniques apply to different synchronisation applications. The talk has a practical focus: listeners will learn how various industry standards approach the problem (AVB, AirPlay, RTP, …), which APIs are available on different platforms, and what practical considerations arise when using WiFi and/or Ethernet as a transport for synchronised audio.
The talk will end with a case study describing how the author helped achieve <10μs audio playback/capture synchronisation accuracy over WiFi on the Syng Cell Alpha.