The ADC team is excited to present a hybrid conference again this year. After Covid precautions forced a fully virtual event in 2020 and a hybrid event in 2021, the team decided to continue offering both an in-person and a virtual event to accommodate the growing audio developer community. ADC will take place in person at CodeNode in London, and the virtual event will be hosted on the Gather platform.
An important part of the conference experience is networking. To bring this social element to online attendees, the ADC team has built a virtual conference hall using the Gather platform. Virtual participants will create an avatar and enter the hall, where they can visit vendor booths, access talks via conference rooms, and even arrange meetings with other virtual guests in private rooms.
“Both in-person and online attendees are equally welcome, and we’re looking forward to seeing everyone,” says Tom Poole, Director of JUCE. “The JUCE team has been organizing ADC for the last 8 years, and we are excited to continue the tradition this year. This conference is a constant source of inspiration, and with every event, I learn new things and make connections with new people.”
What is a digital audio signal? How do we generate one, and in what ways can we manipulate it and extract useful information from it? In this workshop we'll trace the life cycle of an audio signal from a continuous acoustic signal to a discrete digital signal. We'll explore practical methods for processing and shaping audio including:
GPU-based audio processing has long been considered something of a unicorn in both the pro audio and GPU industries. The potential of a GPU’s parallel architecture is both exciting and elusive, owing to the computer science issues involved in mapping sequential DSP algorithm designs onto it and to the fundamental differences between MIMD and SIMD devices. Now that it is possible, GPU-processed audio can offer any audio application processing power that is orders of magnitude greater than its CPU counterparts, fulfilling a cross-industry need that has quickly arisen as digital media content adopts AI, ML, cloud-based collaboration, virtual modeling, simulated acoustics, and immersive audio (to name a few). Previous research had concluded that, because of heavy latencies and a myriad of computer science issues, DSP on GPUs was neither possible nor preferable. Recognizing the need for a viable, low-level standard and framework for real-time professional GPU audio processing, GPU AUDIO INC set out to solve these fundamental problems.
The purpose of this workshop is to give you hands-on experience of what GPU audio processing solves, and what it can mean for your software and the future of audio. It is a taste of the GPU Audio SDK, coming soon.
In this course you will learn about the fundamental problems solved by the new GPU Audio standard, go deeper into the core technology, and learn how to incorporate real-time, low-latency DSP algorithms into your projects. You will take part in a deep-dive, hands-on tutorial: building a simple processor, implementing your own IIR processor, measuring performance and playback, and “taking home” the code to build an FIR processor, all made possible by the GPU Audio Scheduler.
Prerequisites: familiarity with DSP algorithms and design; familiarity with modern SWE tools (IDEs, Git, CI/CD).
During this workshop, participants will learn about digital modeling of analog circuits. This will be applied to the creation of several JUCE plug-ins. Traditional modeling techniques will be discussed along with the presentation of a circuit analysis library which automates the modeling process. This library, called "Point-To-Point Modeling," is intended for audio software developers interested in rapid prototyping and implementation of circuit modeling. Example JUCE plug-ins using the Point-To-Point library will be demonstrated, along with the process of quickly converting arbitrary schematics into C++ code.
A group of opinionated expert programmers will argue over the right and wrong answers to a selection of programming questions which have no right or wrong answers.
We'll aim to cover a wide range of topics such as: use of locks, exceptions, polymorphism, microservices, OOP, functional paradigms, open and closed source, repository methodologies, languages, textual style and tooling.
The aim of the session is to demonstrate that there is often no clear-cut best practice for many development topics, and to set an example of how to examine problems from multiple viewpoints.
Panelists: Jason Dasent, Quintin Balsdon, Mary-Alice Stack, James Cunningham, Grace Capaldi, Harry Morley
This session, hosted by music producer, audio engineer and accessibility consultant Jason Dasent, brings together key players from across the music industry, all with a passion for accessibility. The event will take the form of a discussion panel focusing on the current state of accessibility as it relates to music technology, as well as how we can work together to take it to the next level.
Topics that will be covered include: the advancements made by several music equipment manufacturers in the last two years; how to inspire other music equipment manufacturers to make their products and services accessible; marketing opportunities for companies that make accessible products; and how we bridge the gap between able-bodied and differently abled music industry practitioners, leading to more collaborations and employment opportunities for professional differently abled practitioners. The event will culminate in a 20-minute performance showing the latest in accessible music tech, from keyboards to groove stations to a fully accessible mixing system.
Throughout the conference, attendees are encouraged to visit the “Will You Be Next?” Accessibility Zone, where they can meet software engineers and managers from a variety of companies who are all already involved in accessible music tech. Visitors to the Accessibility Zone will be invited to get hands-on with all the accessible equipment that will be on display. Attendees will also be able to experience music production from recording to mastering, all with accessible equipment.
Join Quiz Master Derek Heimlich as he leads quiz night on Tuesday, November 15th at CodeNode. Questions will range from audio programming trivia to mathematical riddles to some real stumpers. Attendees will form teams and compete against each other for the chance to win some incredible prizes.
Software and hardware prizes have been generously collected from our ADC22 Sponsors:
Scarlett 4i4 Audio Interfaces
Two SSL 2+ Audio Interfaces
...and lots of other incredible software prizes!
We invite all in-person ADC participants to take part in the annual tradition!
ADC Quiz Night
November 15, 2022
7:30 - 9:00pm
CodeNode Lower Level
Celebrating Women in Audio
Celebrating Women in Audio is an important ADC Diversity and Inclusion Initiative. The Audio Developer Conference is committed to increasing diversity in all aspects of our community and events, and "Celebrating Women in Audio" has been a positive addition to the program every year.
This year we Celebrate Women in Audio at the Conference with two wonderful events:
Celebrating Women in Audio Reception
Tuesday, November 15
6:00pm - 7:30pm
South Park Hotel (over the road from CodeNode)
Celebrating Women in Audio Working Lunch
Wednesday, November 16
12:50pm - 2pm
CAPSLOCK, CodeNode
All women, and those identifying as such, will receive a women's-fit, organic cotton WiA T-shirt. We are grateful to the sponsors of this initiative: Ableton, Focusrite, Source Elements, Softube, and Rafter Marsh.
Keynotes and talks will cover a wide range of topics and will be presented both in-person and online. We are excited to present this year’s Keynotes:
The Musical Instruments of Star Trek by Astrid Bin
In the futuristic universe of Star Trek there are a lot of musical instruments, and many of them use far-future technology. The designers of these instruments never intended them to actually work, so they were led by their imaginations rather than by the limitations of earthly technology: the opposite of instrument design today, where the process tends to be heavily influenced by the affordances of the technology we have to hand. In this talk, music technology researcher and theorist Astrid Bin explains how she explored this imagination-first approach to instrument design by recreating an instrument from the show as faithfully as possible. Through the process, from discovering the instrument, to getting input from the show's original production designer, to figuring out how to make the instrument's behaviour true to the original intentions (but using primitive 21st-century embedded sensors and computers), she describes what she learned about designing real digital musical instruments by trying to recreate an imaginary one.
Incompleteness is a Feature Not a Bug by David Zicarelli
If you’ve been in the music technology field for any length of time, you may have encountered the visual programming environment called Max. Maybe you’ve wondered what kind of people work on a computer program that doesn’t really seem to do anything?
In this talk David will share the unlikely story of Max’s transformative impact on both people and organizations, starting with his own life and that of his company Cycling ‘74. Twenty-five years ago he was a reluctant entrepreneur with no real goals other than continuing to work on some cool software. Over time, he became more interested in exploring new ways of working, and realized that Max itself was an inspiration for the culture he was seeking as a software developer. Max's design and philosophy have allowed Cycling ‘74 to work as a fully remote team since the beginning, with little need for planning and hierarchy. David will identify some properties common to both software and organizational architecture, many learned through trial and error, that seem to sustain the creative flourishing of both people and teams. Some of these include learning to solve less than 100% of the problem, parameterizing interdependence and personal development, and building trust through innovation instead of rules. Finally, David will discuss some limitations of the Max approach and show how the team has tried to address them in their most recent work related to code generation and export.