All ADC21 keynotes, panels, and talks will be available to all attendees, irrespective of ticket type. In-person sessions will be broadcast live to online attendees and online sessions will be broadcast into the venue.
Please note that all times are in UTC
While it's important to celebrate what unites hardware and software engineering, the principles that divide our best practices are vital to understand too. If you or your company makes software and wants to diversify into electronics, what does the older but slower discipline teach you about improving your chances?
Since the late twentieth century, I've worked as a hardware and software developer, a manager, a consultant, and most recently started a modest little company that ships its own product. I've participated in the rise of Agile, reluctantly accepted silos between engineering disciplines, been involved with hits and flops, and watched decisions (some brilliant, some ruinous) play themselves out. Even in complex and luck-heavy games like ours, certain patterns repeat. I've collected a few of these from the technical and commercial side, asked around for others, and attributed plenty to experience.
Reasonable people share an eagerness to skip stupid and obvious mistakes in favour of clever, inscrutable ones. This talk is for software specialists who are eager to tinker with the physical world for profit and pleasure.
In a real-time audio context, your code needs to not only produce the correct result, but to do so reliably in a deterministic amount of time. We need to completely avoid locks, allocations, system calls, algorithms with amortised complexity, and more.
How suitable is the C++ standard library in this context? In this talk, we will go through many of its facilities in detail. Which are safe to use in (near-)real-time contexts? Which should be avoided, and why? We will discuss well-established utilities and useful patterns as well as some less commonly known details.
This talk is a different kind of tour through the standard library – and afterwards, you will be more confident in using it in your audio code!
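As a rough sketch of the kind of discipline involved (illustrative only, not an example from the talk; the class and names are hypothetical), this contrasts preparation work that may allocate with a callback that must stay lock- and allocation-free:

```cpp
#include <vector>

// Hypothetical sketch: why some standard-library calls are unsafe on the audio thread.
class GainProcessor
{
public:
    // Prepare on a non-real-time thread: allocation is acceptable here.
    void prepare (int maxBlockSize)
    {
        scratch.resize ((size_t) maxBlockSize);   // may allocate: fine off the audio thread
    }

    // Audio callback: no locks, no allocations, no system calls.
    void process (float* buffer, int numSamples) noexcept
    {
        // A std::lock_guard or scratch.push_back() here could block or allocate,
        // so the worst-case execution time would no longer be bounded.
        for (int i = 0; i < numSamples; ++i)
            buffer[i] *= gain;
    }

private:
    std::vector<float> scratch;   // pre-sized in prepare()
    float gain = 0.5f;
};
```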
Gridsound is a web-based digital audio workstation. We started the project back in 2015 in an attempt to play with the Web Audio API.
Since then, we realized its potential and decided to push the software further as an HTML5 DAW.
In this talk, we will discuss how we built the app and the possibilities the Web Audio API offers, but also the limits and issues we ran into and how we have faced, or plan to face, them.
As we consider using WebAssembly, we will also talk about the place of this technology in completing our DAW.
Audio Modeling aims to create a complete ecosystem of virtual instruments and audio software for live and studio musicians, music producers, and composers.
Developing cross-platform audio software comes with its fair share of challenges, one of them even requiring collaboration with PACE to develop a tool tailored to their needs. By using JUCE as their development framework and Continuous Integration (CI) as their methodology, Audio Modeling is able to keep a fast-paced development process while working with a small team of developers.
Eleonora Dolif walks us through the exact steps, tools, and methodology used by the Audio Modeling team to develop their products.
Unit tests are a great way to test isolated units of code, but they can’t give you confidence about a complete application. End-to-end testing aims to test an application in close to real-world circumstances. However, the automated end-to-end testing of desktop applications presents a number of challenges.
We will demonstrate an end-to-end test system that we have developed for testing JUCE desktop applications. The tests are written in TypeScript. Each test starts the application, runs a single test, then shuts the application down and logs the results.
We will discuss how we overcame some of the challenges that we faced. For example, how do you control the keyboard and mouse without special privileges? How do you stream audio on a continuous integration node that doesn’t have any audio devices? How do you read the state of the user interface in a reliable way?
In this talk I will demonstrate the steps I have taken to learn the new C++20 ranges library, and how taking advantage of this library has required a mental shift in the way I write code.
We will explore what the ranges library has to offer by stepping through some basic problems to solve, gradually changing from an imperative style to something more declarative. We will look at the relationship with functional programming, and the merits and demerits of adopting this style in C++ today.
What I hope you will take away from this talk is...
- What is a `range`
- What is a `view`
- What is the relationship between `container`, `iterator`, `range`, and `view`
- How to write a basic `view`
- An introduction to the ranges library and how to use it
- How the ranges library can improve code readability, testability, and thread safety
- Examples, resources, and tips to help you adopt the ranges library
- An overview of some of the difficulties you might face adopting the ranges library today
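As a small, hypothetical taste of that imperative-to-declarative shift (not code from the talk), here is a loop rewritten as a lazily evaluated pipeline of views:

```cpp
#include <ranges>
#include <vector>

// Sum the squares of the positive values, declaratively.
float sumOfSquaredPositives (const std::vector<float>& xs)
{
    auto squaredPositives = xs
        | std::views::filter    ([] (float x) { return x > 0.0f; })
        | std::views::transform ([] (float x) { return x * x; });

    float total = 0.0f;
    for (float v : squaredPositives)   // the view is evaluated lazily, here
        total += v;
    return total;
}
```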
Spatial Audio is on the threshold of the mainstream. Many of its backers are hoping for it to challenge stereo's dominant position as the de facto format. Yet many challenges stand in the way of giving spatial audio a chance. In this talk, we look at the technical challenges that need to be overcome if we're going to deliver on the opportunities that come with making spatial audio the next chapter in audio history.
Modern console, mobile and VR games typically juggle tens of thousands of compressed samples at once, playing a select hundred or more via a cascade of codecs, filters and DSP effects, updated tens of times per second. Ambisonics, granular synthesis, multiple reverbs, psychoacoustic analysis, live and interleaved streams, are all thrown into the mix.
Game audio programmers design, create and embed an automated mix engineer for each game. Days or years later, player(s) call the shots, place the camera and tailor a custom mix the sound designers have never heard, which is consistent, informative and immersive.
This talk explains the audio tech stack used on consoles, VR and mobile devices. It draws on decades of professional game programming, explaining (amongst other things) why you can never have enough voices, how to manage when 96 tyres bite the asphalt on the first corner of a Formula 1 simulation, the merits of multiple listeners, why most game sounds play at pitches other than the one recorded, which standards are customarily ignored and why, how audio is the most real-time part of game software, and why only the worst case matters.
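As a hedged illustration of why voice management matters (hypothetical code, not from the talk): with thousands of potentially audible sounds alive each frame, only the highest-priority few can be rendered as real voices:

```cpp
#include <algorithm>
#include <vector>

// Thousands of "virtual" voices may exist each frame, but only the
// highest-priority ones are actually rendered by the mixer.
struct VirtualVoice
{
    float priority;   // e.g. estimated loudness at the listener, importance, age
    int   soundId;
};

void chooseAudibleVoices (std::vector<VirtualVoice>& voices, size_t maxRealVoices)
{
    if (voices.size() <= maxRealVoices)
        return;

    std::partial_sort (voices.begin(), voices.begin() + (long) maxRealVoices, voices.end(),
                       [] (const VirtualVoice& a, const VirtualVoice& b)
                       { return a.priority > b.priority; });

    voices.resize (maxRealVoices);   // everything else stays virtual this frame
}
```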
We'll start with a hypothesis that the way we approach writing modern audio software and DSP will significantly change in the near future. In particular, the imperative, object oriented programming model that dominates modern audio software will be replaced by a more declarative, functional programming model.
We'll explore this hypothesis with some common approaches to writing audio DSP using C++, demonstrating what works and what doesn't. We'll note that every digital signal is a function of time, and often must be stateful: we need signals that change over time. Here, the object oriented model in C++ can be quite helpful, but carries with it complexity that limits our ability to compose larger processes from smaller processes, and engages the author with the "how" of the system when only the "what" really matters.
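A minimal sketch of the contrast, assuming a deliberately naive representation of a signal as a function of time (illustrative, not the speaker's code):

```cpp
#include <cmath>
#include <functional>

// A signal is a function of time, in seconds.
using Signal = std::function<float (float)>;

Signal sine (float frequency)
{
    return [frequency] (float t) { return std::sin (6.2831853f * frequency * t); };
}

Signal gain (Signal input, float amount)
{
    return [input = std::move (input), amount] (float t) { return amount * input (t); };
}

// The "what" rather than the "how": the graph is described by composing functions.
// Signal output = gain (sine (440.0f), 0.5f);
```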
Cornflower - a mobile audio-app prototype - is designed following the principle of what I call Sp/ARC - i.e. Specification and Audiation in Real-time Composition. It is meant to function as an improvisatory composition and recording environment.
The presentation will focus on the following:
- The interaction between Specification and Audiation in live coding
- The motivation for creating a personalised mobile composition and performance software
- The influence of other music programming and performance software
- The search for an appropriate framework to build the software
- The importance of visual aesthetics
- My response to user input in terms of features included and excluded
- The importance of the constraints built into the app’s design for fostering a productive creative experience
- The constraints imposed by the mobile platform and the issue of multiple form factors and hardware responsivity
- The role of physical modelling and timbre in the app experience
- General conclusions about mobile audio software development and hopes for the future
Are you getting the most efficiency out of your DSP code? Do you want to speed up your code by 8x?
Leveraging SIMD through processor intrinsics is one of the best ways to speed up your code: you squeeze the most performance out of each CPU cycle, which ultimately gives users more headroom to be creative, and that's what we're all here for.
In this talk we'll walk through optimising a standard modern audio convolution algorithm with processor intrinsics to get the kind of next-level performance that the likes of Native Instruments deliver to their users in the Massive X plugin.
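As a hedged, much-simplified taste of the technique (a gain loop rather than convolution, and not code from the talk), SSE intrinsics process four samples per instruction:

```cpp
#include <immintrin.h>   // SSE intrinsics

// Multiply a buffer by a gain, four samples at a time.
// Assumes numSamples is a multiple of 4; unaligned loads/stores are used for simplicity.
void applyGainSSE (float* buffer, int numSamples, float gain)
{
    const __m128 g = _mm_set1_ps (gain);

    for (int i = 0; i < numSamples; i += 4)
    {
        __m128 x = _mm_loadu_ps (buffer + i);  // load 4 floats
        x = _mm_mul_ps (x, g);                 // 4 multiplies in one instruction
        _mm_storeu_ps (buffer + i, x);         // store 4 floats
    }
}
```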
Due to the dynamic nature of complex audio signals, objectively measuring and reasoning about properties like perceived loudness, dynamics or spectral balance is a surprisingly challenging task. At best, we might usually average some short-term measurement over a longer time (e.g. when measuring LUFS). In such a process, a lot of interesting information can get lost, especially for audio recordings outside the realm of modern mainstream music.
For practitioners in the creative process of music recording, production and mastering, judging dynamics and determining how to manipulate them (e.g. using a compressor) is among the most difficult things to learn. The process requires drudgingly acquired experience and critical listening skills, while visual aids are mostly limited to observing short-term cues such as level meters or real-time analyzers.
This talk introduces some new methods for visualizing, analyzing and manipulating audio, which allow for a more meaningful and intuitive assessment of what's actually going on in practical music signals. Examples of music from different genres and epochs are shown and discussed, as well as single instrument and vocal tracks.
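As a hypothetical example of the kind of single-number summary that throws information away (not the talk's method), a short-term RMS averaged over a sliding window might look like this:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// RMS of one window of samples starting at 'start'.
float windowedRms (const std::vector<float>& samples, size_t start, size_t windowLength)
{
    double sumOfSquares = 0.0;
    const size_t end = std::min (start + windowLength, samples.size());

    for (size_t i = start; i < end; ++i)
        sumOfSquares += (double) samples[i] * samples[i];

    return (float) std::sqrt (sumOfSquares / (double) windowLength);
}
```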
No magic numbers, no strange nested filters, no tricky tuning.
Presenting a clean and flexible approach to writing a smooth high-quality reverb, using a variation on the classic feedback-delay network (FDN) structure.
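For orientation only, a bare-bones sketch of the classic FDN structure; the delay lengths, mixing matrix and gains below are arbitrary placeholders, not the design presented in the talk:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// A minimal 4-line feedback-delay network: read, mix, feed back, advance.
class TinyFdn
{
public:
    TinyFdn()
    {
        for (size_t i = 0; i < 4; ++i)
            lines[i].assign (lengths[i], 0.0f);
    }

    float process (float input)
    {
        std::array<float, 4> outs {};
        for (size_t i = 0; i < 4; ++i)
            outs[i] = lines[i][pos[i]];

        // Orthogonal feedback mixing (4x4 Hadamard, scaled to preserve energy).
        const float h = 0.5f;
        const std::array<float, 4> mixed
        {
            h * (outs[0] + outs[1] + outs[2] + outs[3]),
            h * (outs[0] - outs[1] + outs[2] - outs[3]),
            h * (outs[0] + outs[1] - outs[2] - outs[3]),
            h * (outs[0] - outs[1] - outs[2] + outs[3]),
        };

        for (size_t i = 0; i < 4; ++i)
        {
            lines[i][pos[i]] = input + feedbackGain * mixed[i];  // write back into the line
            pos[i] = (pos[i] + 1) % lengths[i];
        }

        return 0.25f * (outs[0] + outs[1] + outs[2] + outs[3]);
    }

private:
    std::array<size_t, 4> lengths { 1031, 1327, 1523, 1733 };  // arbitrary, mutually prime
    std::array<std::vector<float>, 4> lines;
    std::array<size_t, 4> pos {};
    float feedbackGain = 0.7f;
};
```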
We have been maintaining a fork of JUCE with many changes and fixes that we require for our products.
But as JUCE advances at a fast pace, maintaining our branch while merging in the latest improvements and changes from JUCE has proven to be somewhat of a challenge.
In this talk I'll describe the tools and techniques that we use to resolve all the merge conflicts while keeping our sanity, such as: git-mediate, sub-merge, and more.
We explore the use of expression templates for the zero-overhead composition of small units of DSP.
Our primary aim is to approach the low-friction, fine-granularity expressiveness of DSL systems like FAUST, while staying fully within the confines of C++ and allowing the full use of its type system. With this, the final graph construct is a simple C++ class, which can be instantiated and exercised without any scaffolding or external tooling. The conceptual machinery can be uniform, where tunable algorithm parameters quickly reduce to ordinary graph data and are processed themselves in one flat graph space.
The idea naturally extends to the processing of arbitrary data types through the graph - such as abstract control types, and types supporting block-based and frequency-domain processing. Further, it is trivial to embed the resulting classes into plugins via template adapters. Finally, the expression-template paradigm enables global optimization of the result.
When using such a system, development can be a very low-friction process, granularity can be fine, and the exact same user code can be embedded into both real-time and offline contexts without special affordances.
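A toy sketch of the general idea, assuming hypothetical Gain and OnePole units (this is a simplification, not the authors' expression-template machinery): composing processors yields a single flat C++ type that needs no scaffolding:

```cpp
// Composing two processors produces one composite type; calls can be inlined away.
template <typename A, typename B>
struct Chain
{
    A first;
    B second;

    float process (float x) { return second.process (first.process (x)); }
};

template <typename A, typename B>
Chain<A, B> operator>> (A a, B b) { return { a, b }; }

struct Gain
{
    float amount;
    float process (float x) { return amount * x; }
};

struct OnePole
{
    float coeff;
    float state = 0.0f;
    float process (float x) { state += coeff * (x - state); return state; }
};

// Usage: the graph is an ordinary object, instantiated and exercised directly.
// auto graph = Gain { 0.5f } >> OnePole { 0.1f };
// float y = graph.process (1.0f);
```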
Can a text-driven interface replace and improve over DAW workflows, without sacrificing quality? If DAW primitives were provided as building material to the musician, how easy is it for musicians to program their own workflow?
- Music Loop with Eval
- Using observers to control samples and web audio nodes
- Using hexadecimals and binary for a compact sequencer notation
- Timers and arrays for dynamic automation
- Side chain any parameter with code
- Build your own UI
- Get a VJ for free by using butterchurn and p5
- Use state to compose notes relative to other notes
- Use reminders for cuepoints
- Melody is just a constraint
- Fit a song inside a url parameter and share tracks
- Simplify sample use with urls
- Your DAW is cool, but does it have version control?
Wireless audio was one of the first ever applications of Bluetooth technology. It was originally based on the older Bluetooth BR/EDR and provides one-to-one audio streaming between two devices such as a smartphone and a pair of headphones.
LE Audio is a new Bluetooth audio technology. It uses the more efficient Bluetooth Low Energy (LE), has a new and better audio codec and supports completely new use cases such as audio sharing and broadcast audio. The new standard is defined in a series of technical specifications which collectively provide a generic audio framework for next generation wireless audio products.
This talk will review LE Audio and its constituent technical parts and specifications.
Microcosmos is a small (130 x 80 mm) open-source electronic board, aimed at prototyping electronic musical instruments and learning electronics, microcontroller programming and audio DSP.
The Web Audio API allowed web apps to synthesize sound, add effects, and generally create and transform sound in ways that were previously inconceivable on the web. With WebAssembly and AudioWorklets now in the mix, popular music creation platforms are adding the Web as a development target! We'll survey the current scene, we'll show you how to port your native C++ apps to the Web, and we'll discuss what's coming next to a browser near you.
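One common porting pattern, shown here as a hedged sketch rather than a recipe from the talk: expose a plain C entry point from your C++ DSP (the function name and parameters are illustrative) so a WebAssembly build can be driven from an AudioWorklet. The JavaScript glue that allocates the output buffer in the WASM heap and wires up the worklet is omitted here.

```cpp
#include <emscripten/emscripten.h>
#include <cmath>

extern "C"
{
    // Kept alive so the JavaScript side can call it on the exported WASM module.
    EMSCRIPTEN_KEEPALIVE
    void renderBlock (float* output, int numSamples, float frequency, float sampleRate)
    {
        static float phase = 0.0f;   // toy sine oscillator standing in for real DSP
        for (int i = 0; i < numSamples; ++i)
        {
            output[i] = 0.25f * std::sin (6.2831853f * phase);
            phase += frequency / sampleRate;
            if (phase >= 1.0f)
                phase -= 1.0f;
        }
    }
}
```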
We (Output) present our approach to rapid prototyping through the combination of web technologies and JUCE/C++.
Building on our previous work using WebViews for plugin UIs, we will demonstrate how we combine web technologies (TypeScript, HTML, CSS) and APIs (Web Audio, Web MIDI) with JUCE and C++ to prototype new features and products.
We will discuss the journey of a major new feature, from concept to prototype, to final implementation; demonstrating how we combine web technologies, JUCE/C++, and visual design tools to iterate on a concept and deliver a complex feature with high confidence that the final version would meet our expectations.
We will discuss how web tech fits into our prototyping process, the tools we use, and the benefits and limitations of this approach.
Please note that all times are in UTC
There is no such thing as neutral technology. What DAWs, synthesisers, virtual instruments, audio effects plugins, notation programs, even AI and machine learning models have in common is that they are almost exclusively based on Western music theory, concepts and perspectives.
Cultural bias inscribed in technology mirrors the bias that runs through Western music theory, which to this day has not succeeded in addressing and dismantling its non-neutrality and the colonial framework that informed many of its canonical 19th-century works.
Things could be different. Technologies could equally be applied to embrace all musical cultures, yet these possibilities are rarely implemented, and when they are, mostly to merely symbolic effect.
This presentation will focus on the possibilities for transcultural music technologies by asking: Which structural and technological changes are necessary to allow for more liberated, creative, and culturally balanced processes of music-making? How can we focus on the relational character of all creative processes and their value to us, and emphasise the unique ways creativity connects and responds to our times and contexts?
Many species of vocoders exist, with vastly different technology and purpose. Sadly, only a fraction of this technology has been used - or abused - for musical purposes so far. My talk shall focus on a number of available vocoder technologies and their potential for use in music, with many live examples but also some math and code. The goal is to offer an entertaining and inspiring talk for all skill levels.
A group of opinionated expert programmers will argue over the right and wrong answers to a selection of programming questions which have no right or wrong answers.
We'll aim to cover a wide range of topics such as: use of locks, exceptions, polymorphism, microservices, OOP, functional paradigms, open and closed source, repository methodologies, languages, textual style and tooling.
The aim of the session is to demonstrate that there is often no clear-cut best-practice for many development topics, and to set an example of how to examine problems from multiple viewpoints.
Continuous Integration and Continuous Deployment (CI/CD) are software development practices that are useful for maintaining code quality, and catching bugs before they affect end-users. This talk will discuss why CI/CD can be helpful for teams developing audio plugins, and compare some of the tools that are available for creating CI/CD pipelines. Finally, the talk will demonstrate some workflows for accomplishing various CI/CD tasks in the context of audio plugins.
The ADC Open Mic Night is back! A fun, informal evening with lightning talks, music performances, and some impromptu standup comedy, hosted by Timur Doumler.
If you are attending the ADC on site, you can contribute to the Open Mic night with a 5 minute talk or performance! Submit your idea here: https://forms.gle/QtSLM9JueCQ55A6Z7
This is an event exclusively for on-site attendees. It won't be recorded, published, or streamed online.
The majority of commercial synthesisers use either sample-based or subtractive techniques. Additive synthesis is rarer, but offers significant capabilities which are hard or impossible to replicate any other way. This talk explores additive synthesis, its principles, and how to implement banks of oscillators efficiently using the CORDIC algorithm. We will go on to discuss techniques for generating additive patches from samples, and some of the unusual modulation techniques it offers.
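For context, a minimal sketch of a rotation-based recursive oscillator, the kind of structure an additive oscillator bank can be built from; CORDIC is one way to evaluate such rotations cheaply, but the straightforward multiply version is shown below (hypothetical code, not the talk's implementation):

```cpp
#include <cmath>

// Each sample rotates the vector (x, y) by a fixed angle, generating a sinusoid
// without calling sin() per sample. One instance per partial in an additive bank.
struct RotationOscillator
{
    void setFrequency (float hz, float sampleRate)
    {
        const float angle = 6.2831853f * hz / sampleRate;
        c = std::cos (angle);
        s = std::sin (angle);
    }

    float next()
    {
        const float nx = x * c - y * s;   // rotate by the per-sample angle
        const float ny = x * s + y * c;
        x = nx;
        y = ny;
        return y;                         // sine output; x holds the cosine phase
    }

    float c = 1.0f, s = 0.0f;
    float x = 1.0f, y = 0.0f;
};
```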
We all know how to play/capture audio from a single audio interface with our favourite audio API. But how do you play/capture audio synchronously across multiple audio interfaces, computers, local networks or even the internet? The basic principle is always the same and can roughly be split into three distinct tasks:
1. Query the current presentation/capture time of each audio interface
2. Predict and convert between presentation/capture times of different clock domains using mathematical models
3. Control the playback/capture rate of each audio interface.
After a brief introduction, this talk will examine each of the above tasks in detail and how various algorithms and techniques apply to different synchronisation applications. The talk has a practical focus: the listener will learn how various industry standards approach the problem (AVB, AirPlay, RTP, …), which APIs are available on different platforms, and various practical considerations when using WiFi and/or Ethernet as a transport to synchronise audio.
The talk will end with a case study on how the author helped achieve <10μs audio playback/capture synchronisation accuracy via WiFi on the Syng Cell Alpha.
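As a hedged sketch of task 2 above (hypothetical, and not how any of the named standards specify it), the relationship between two clock domains can be modelled as a line with an offset and a rate skew, fitted from paired timestamp observations:

```cpp
// Converts times between a local clock domain and a remote/reference one.
struct ClockModel
{
    double offset = 0.0;   // seconds
    double skew   = 1.0;   // remote seconds per local second

    // Refit from the two most recent (localTime, remoteTime) observation pairs.
    void update (double local0, double remote0, double local1, double remote1)
    {
        skew   = (remote1 - remote0) / (local1 - local0);
        offset = remote0 - skew * local0;
    }

    double localToRemote (double localTime) const { return offset + skew * localTime; }
};
```

In practice the fit would use more observations and some filtering to reject jitter, but the principle of predicting one clock from another stays the same.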
Music is widely and generally defined as an art that consists in devising and producing structured sequences of sounds. Sounds are physical vibratory phenomena, characterized by properties such as propagation speed, which establish a strict relation between the phenomenon itself and the dimensions of space and time.
Interactions between musicians and listeners during a music performance are consequently delimited by both of those dimensions. Despite this, is it nonetheless possible to reduce how much space and time affect the performative act?
In this talk, the concept of Networked Music Performance (NMP) will be introduced as a solution to this issue. To demonstrate the validity of that solution, an ecosystem for NMP named MuSNet will be presented, and its implementation process and subsequent use will be described as a case study.
What if "remote jamming" didn't require any additional software to be installed? What if we could collaborate across vast distances in real-time with just a web browser?
Thanks to modern web standards, we can!
Using WebRTC for real-time multimedia and data, and Web MIDI to control some hardware, we can create and collaborate like we're in the same room.
Developing audio hardware comes with a unique set of hardware and firmware challenges that, to a beginner, may seem insurmountable. However, with an understanding of the basic design principles and tools, developing your own hardware can become an increasingly feasible prospect. While drawing examples from the development process of [BitMasher](https://www.meoworkshop.org/silly-audio-processing-bitmasher-edition/), a hand-held audio effects game, this talk will cover some key aspects of audio hardware and firmware design, including bare-metal programming, circuit design and some points on manufacturability.
Developing a wavetable oscillator requires combining knowledge and experience across several parts of digital signal processing. Information on developing wavetable oscillators is spread across the internet, so it takes a lot of trial and error to find the best combination of approaches for your needs.
In this talk we'll go over all you need to know to start creating fast and high quality wavetable oscillators. We'll compare the pros and cons of several different approaches used in professional synths to band limit wavetables, interpolate buffers, and optimize these using SIMD.
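As a hedged starting point (not the talk's code), a wavetable oscillator reduced to its core is a phase accumulator plus an interpolated table read; band-limiting and SIMD optimisation, which the talk covers, are omitted:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// One-cycle wavetable read with linear interpolation.
class WavetableOscillator
{
public:
    explicit WavetableOscillator (std::vector<float> cycle) : table (std::move (cycle)) {}

    void setFrequency (float hz, float sampleRate)
    {
        increment = hz / sampleRate;       // phase advance per sample, in cycles
    }

    float next()
    {
        const float position = phase * (float) table.size();
        const size_t i0 = (size_t) position % table.size();
        const size_t i1 = (i0 + 1) % table.size();
        const float frac = position - std::floor (position);

        phase += increment;
        phase -= std::floor (phase);       // wrap to [0, 1)

        return table[i0] + frac * (table[i1] - table[i0]);   // linear interpolation
    }

private:
    std::vector<float> table;
    float phase = 0.0f;
    float increment = 0.0f;
};
```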
Recent JUCE releases have made the development landscape for Linux and Raspberry Pi much more streamlined than in the past.
This talk offers newcomers to the platform an in-depth overview of how to achieve workflows comparable to macOS and Windows development. It will discuss native development workflows for Ubuntu as well as using Ubuntu as a remote interface for the Raspberry Pi.
Source code accompanying the talk will also provide valuable resources, including example toolchains, tips and tricks, as well as project templates for build artifact types not typically needed on other platforms, such as binaries that can run either with a local GUI or in headless mode with a remote GUI.
Quantum computing is exciting, from challenging our ideas about the physical world to promising revolutionary applications. But how can audio developers exploit these opportunities?
This talk will present the creation of a quantum plug-in. We'll cover quantum computing, explaining seemingly counter-intuitive phenomena. Any maths required to explore this topic will be kept to a minimum.
Then, we'll switch to a product design perspective, developing our new module: a sequencer with real-time controls for the virtual modular environment VCV.
Afterwards, we will explore quantum addition via the quantum equivalents of bitwise operators (xor, and).
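For orientation, the classical half adder that such a circuit mirrors: XOR produces the sum bit (its reversible quantum analogue is a CNOT gate) and AND produces the carry bit (analogous to a Toffoli gate). A trivial sketch, not taken from the talk:

```cpp
#include <cstdint>

struct HalfAdderResult { uint8_t sum; uint8_t carry; };

// Classical half adder on single bits.
HalfAdderResult halfAdd (uint8_t a, uint8_t b)
{
    return { (uint8_t) (a ^ b),    // sum   = a XOR b  (quantum analogue: CNOT)
             (uint8_t) (a & b) };  // carry = a AND b  (quantum analogue: Toffoli)
}
```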
Going over to the implementation, we will plan and simulate a quantum circuit. Changes in probability amplitudes can be calculated and viewed in a three-dimensional representation of the multi-qubit statevector, which aids understanding.
Finally, we will port the prototype to a real-time VCV audio plug-in in C++.
Attendees should leave with a better understanding of quantum mechanics. In addition, they will see the development of a complete VCV plugin, from ideation to design, a prototype, and finally a working real-time product.
This free and open-source work is available at https://github.com/jcelerier/vintage
In this talk, we (SKR Audio Labs) introduce our project on creating a DAW that operates entirely in the cloud. We believe that this is the next step in the realm of audio production that will free users from numerous barriers and in turn open up the market for more creators.
Throughout this talk, we will go over how we approached the problem of moving to the cloud, the numerous technical and practical challenges, and what we do to solve and/or mitigate these issues. In addition, we will cover the pros and cons of the various solutions and focus areas throughout the course of development, as well as how they relate to larger software architecture decisions.
The GASP project ‘Guitars with Ambisonic Spatial Performance’ investigates the design and realisation of an Immersive Guitar System. Few instruments exist that make use of spatial sound production.
GASP is an ongoing research project, where our interest in Ambisonic algorithmic research and guitar sound production is combined with off-the-shelf hardware and bespoke software. It is an innovative audio project, fusing the musical with the technical, combining individual string timbralisation with Ambisonic immersive sound. See: http://gaspproject.xyz/wp-content/uploads/2020/07/GASP-paper-for-Innovation-in-Music.pdf
For Ambisonic playback or monitoring, the audio is typically heard over a ring of eight (or more) loudspeakers, or alternatively over headphones using binaural reproduction, which includes future applications for Virtual Reality platforms.
Our more recent work investigates live performance applications in small or large format concert systems with Dolby Atmos.
Further information: GASP – Guitars With Ambisonic Spatial Performance [online], 2021. http://gaspproject.xyz
“Tracklaying” (triggering) individual sounds is accomplished using game dev and middleware tools in similar ways across many different types of games. But how are sounds choreographed to form a soundtrack in inherently chaotic, open world, sandbox games?
At Avalanche Studios, we've spent a number of years learning some right (and wrong) ways to take information from the game and use it to define meaningful moments.
We’ll show examples from past projects where using a sound designer’s idea of narrative context and the right game data helped us orchestrate better soundtracks.
Over the last few years, Rust has made leaps and bounds in establishing itself as a viable alternative to C++ for real-time audiovisual applications. [Nannou](https://nannou.cc) is a creative coding framework which aims to provide a beginner-friendly, batteries-included experience for coders and artists alike, and enable them to take advantage of the performance, expressive power, and safety guarantees of Rust.
Creative coding is the act of writing computer programs in order to create something expressive rather than something purely functional: works of art, design, architecture, or even fashion. It includes creating or manipulating images, interfacing with sensors and motors, generating musical compositions, controlling lights and lasers, and creating long-running interactive art installations, just to name a few. Nannou provides the tools to accomplish all of the above, and more.
In this talk I will give a high-level introduction to Nannou, explain the anatomy of a typical Nannou application, and walk through the process of building a simple generative music application from scratch.
This presentation demonstrates the innovations necessary for an ideal use of pulsar synthesis in live performance. The study is applied towards **Pulsar**, a VST3/AU/Stand-Alone pulsar synthesizer tailored for live performance and user control.
Pulsar synthesis has been implemented in many Electro-Acoustic works, including some by Karlheinz Stockhausen, Iannis Xenakis, Barry Truax, and Curtis Roads. Each composer accomplished pulsar synthesis with varied methods of analog and digital synthesis, but none of these methods have been optimized for live performance.
**Pulsar** balances a stripped down interface with the parameter trajectories essential to pulsar synthesis. **Pulsar** is available as VST3/AU/Stand-Alone, and is readily integrated into Digital Audio Workstations.
Optimizing pulsar synthesis for live performance will not replace existing approaches, but will instead push the aesthetic of Micro-Sound to develop a new dimension.
Thorsten Sideb0ard is a Scottish/American programmer working within computational art.
While living in London during the 2000s he worked at Last.fm, regularly hosted music events, and ran the record-/netlabels Highpoint Lowlife and 8bitrecs.com. Now living in San Francisco, he has a monthly radio show and runs the annual Algorithmic Art Assembly conference and music festival, working with Gray Area Foundation for the Arts. He is involved with the live coding and algorave communities, working on his own live coding audio REPL “Soundb0ard”, and has an album forthcoming on Broken20.
Soundb0ard is a REPL and live coding language for making algorithmic music.
It contains several synthesizers and effects, plus a live coding environment to manipulate these sound generators.
In this talk Thorsten will cover the evolution of the tool and its architecture, demonstrating the synthesis capabilities and the extent of the custom programming language which can be used to manipulate and control the flow and sound.