ADC Schedule

All ADC21 keynotes, panels, and talks will be available to all attendees, irrespective of ticket type. In-person sessions will be broadcast live to online attendees and online sessions will be broadcast into the venue.

14th November 2021

ADC 2021 Welcome Evening 6-9pm UTC

Be At One Spitalfields, 16-18 Brushfield Street, London E1 6AN (just a few minutes' walk from the conference venue)

15th November 2021

Please note that all times are in UTC

Track 1
Welcome Address
08:30 - 08:50
In-person
How to Make Hardware Without Losing Your Shirt
09:00 - 09:50
Ben Supper
In-person

While it's important to celebrate what unites hardware and software engineering, the principles that divide our best practices are vital to understand too. If you or your company makes software and wants to diversify into electronics, what does the older but slower discipline teach you about improving your chances?

Since the late Twentieth Century, I've worked as a hardware and software developer, a manager, a consultant, and most recently started a modest little company that ships its own product. I've participated in the rise of Agile, reluctantly accepted silos between engineering disciplines, been involved with hits and flops, and watched decisions (some brilliant, some ruinous) play themselves out. Even in complex and luck-heavy games like ours, certain patterns repeat. I've collected a few of these from the technical and commercial side, asked around for others, and attributed plenty to experience.

Reasonable people share an eagerness to skip stupid and obvious mistakes in favour of clever, inscrutable ones. This talk is for software specialists who are eager to tinker with the physical world for profit and pleasure.

Ben Supper

Ben has been involved with ADC since it started. He spent two decades engineering products for Cadac, Focusrite/Novation and ROLI (amongst others), and headed ROLI's R&D team in its formative years. He has refused to pick either side of the hardware/software or product/engineering divides, resists imposing such restrictions on others, and is now permanently overwhelmed by the breadth and depth of detail in the world. He was awarded a PhD in spatial psychoacoustics from the University of Surrey in 2005. Since 2018, Ben has been working as an independent consultant and inventor, has released a head tracker under the Supperware brand, and has expended serious effort trying to improve spatial audio delivery over headphones.
Using the C++ Standard Library for Real-time Audio
10:00 - 10:50
Timur Doumler
In-person

In a real-time audio context, your code needs to not only produce the correct result, but to do so reliably in a deterministic amount of time. We need to completely avoid locks, allocations, system calls, algorithms with amortised complexity, and more.

How suitable is the C++ standard library in this context? In this talk, we will go through many of its facilities in detail. Which are safe to use in (near-)real-time contexts? Which should be avoided, and why? We will discuss well-established utilities and useful patterns as well as some less commonly known details.

This talk is a different kind of tour through the standard library – and afterwards, you will be more confident in using it in your audio code!
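
As a taste of the kind of reasoning involved, here is a minimal sketch (invented for this summary, not taken from the talk) of a common real-time pattern: handing a parameter from a GUI thread to the audio callback through a lock-free std::atomic, since a mutex could block the audio thread.

```cpp
#include <atomic>

// Worth checking: on most platforms std::atomic<float> is lock-free,
// but the standard doesn't guarantee it.
static_assert (std::atomic<float>::is_always_lock_free,
               "fall back to another strategy where this fails");

std::atomic<float> gain { 1.0f };

// GUI thread:
void setGain (float newGain)
{
    gain.store (newGain, std::memory_order_relaxed);
}

// Audio thread: no locks, no allocations, no system calls.
void processBlock (float* buffer, int numSamples)
{
    const float g = gain.load (std::memory_order_relaxed);

    for (int i = 0; i < numSamples; ++i)
        buffer[i] *= g;
}
```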

Timur Doumler

Timur Doumler is the Developer Advocate for C++ at JetBrains. As a developer, he specialises in audio and music software. Timur is an active member of the ISO C++ standard committee, co-founder of the music tech startup Cradle, and conference chair of the Audio Developer Conference (ADC). He is passionate about building inclusive communities, clean code, good tools, low latency, and the evolution of the C++ language.

Timur holds a PhD in astrophysics and originally got into C++ when he was looking for an alternative to Fortran for performing numerical simulations of the large-scale structure of the Universe.

Gridsound: a Web Audio API DAW
11:20 - 12:10
Melanie Ducani and Thomas Tortorini
In-person

Gridsound is a web-based digital audio workstation. We started the project back in 2015 in an attempt to play with the Web Audio API.
Since then, we have realized its potential and decided to push the software further as an HTML5 DAW.
In this talk, we will discuss how we built the app and the possibilities the Web Audio API offers, but also the limits and issues we ran into and how we have faced, or plan to face, them.
As we consider adopting WebAssembly, we will also talk about the role this technology could play in completing our DAW.

Melanie Ducani

Mélanie ‘Misty’ Ducani is a French developer who graduated from Epitech Paris and mainly enjoys low-level programming. She has taken part in Gridsound from the very beginning, through which she discovered the world of web and audio programming.

Thomas Tortorini

Thomas "mr21" Tortorini is a French developer from Paris who has enjoyed audio programming and web design for many years. After receiving his degree from Epitech in 2016, he started the GridSound project, an open-source digital audio workstation using the most modern web APIs.
Automating the Development of Multi-Platform Products
12:20 - 12:50
Eleonora Dolif
In-person

Audio Modeling aims to create a complete ecosystem of virtual instruments and audio software for live and studio musicians, music producers, and composers. 

Developing cross-platform audio software comes with its fair share of challenges, one of them even requiring collaboration with PACE to develop a tool tailored to their needs. By using JUCE as its development framework and Continuous Integration (CI) as its methodology, Audio Modeling is able to keep up a fast-paced development process while working with a small team of developers.

Eleonora Dolif walks us through the exact steps, tools, and methodology used by the Audio Modeling team to develop their products. 

Eleonora Dolif

Eleonora Dolif is an expert C++ developer with a Master’s degree in Computer Music Science from the University of Milan. Fascinated by music since she was a child, Eleonora studied piano from the age of 6. As a young adult, she became part of a small band. From that moment, she discovered a passion for the technological side of music. Through the years, her interest in sound manipulation and sound physics grew. In 2014, she enrolled in the Computer Music Science program at the University of Milan. In parallel, she enrolled in the 4CMP Academy in Milan where she obtained the title of Recording Studio Assistant Engineer. This gave her hands-on experience of working in recording studios in addition to her University studies. After obtaining her certificate, she continued working at 4CMP as a lecturer for their “Creative Arts Research Skills for the Higher National Diploma in Music Technology” course. As her studies at the University continued, she met Stefano Lucato who introduced her to the world of audio programming. In 2018, she graduated from the University of Milan with a thesis titled “Study and implementation of a virtual instrument with physical modeling controlled through MPE technology”. She now works at Audio Modeling as a core developer of SWAM with Emanuele Parravicini, Stefano Lucato, and the rest of the team.
Git-fu: Challenges and Solutions in Maintaining an Active Fork of JUCE
14:00 - 14:50
Yair Chuchem
In-person

We have been maintaining a fork of JUCE with many changes and fixes that we require for our products.

But as JUCE advances at a fast pace, maintaining our branch while merging in the latest improvements and changes from JUCE has proven to be somewhat of a challenge.

In this talk I'll describe the tools and techniques that we use to resolve all the merge conflicts while keeping our sanity, such as: git-mediate, sub-merge, and more.

Yair Chuchem

Yair is a programmer with a passion for good code, math, and algorithms. He has worked in the audio industry for more than 10 years as a co-founder of Sound Radix. Apart from software, he enjoys practicing amateur acrobatics.
End-to-end Testing of a JUCE Desktop Application
15:00 - 15:50
Joe Noël
In-person

Unit tests are a great way to test isolated units of code, but they can’t give you confidence about a complete application. End-to-end testing aims to test an application in close to real-world circumstances. However, the automated end-to-end testing of desktop applications presents a number of challenges.

We will demonstrate an end-to-end test system that we have developed for testing JUCE desktop applications. The tests are written in TypeScript. Each test starts the application, runs a single test, then shuts the application down and logs the results.

We will discuss how we overcame some of the challenges that we faced. For example, how do you control the keyboard and mouse without special privileges? How do you stream audio on a continuous integration node that doesn’t have any audio devices? How do you read the state of the user interface in a reliable way?

Joe Noël

Joe is a Software Developer at Focusrite. During the last eight years, Joe has worked on various projects across the Focusrite, Novation, and Ampify brands, spanning desktop development for macOS and Windows, mobile development for iOS, and embedded development for ARM processors.
Session To Be Confirmed
16:20 - 16:50
Jonathan Sandman, Audiotonix
In-person

Keynote
17:00 - 17:50
Ruth John
In-person
Keynote

Ruth John

Ruth is a creative engineer with a web development background. Her career spans twenty years working on websites, applications and, most recently, interactive art projects, especially those featuring audio. She also educates people and enjoys talking about new web technologies, inspiring others to try them. She works as a technical writer at Mozilla, is a founding member of { Live : JS } and co-hosts the Generative Artistry podcast.
Evening Meal & Networking
18:00 - 19:30
ESC/Spacebar - Codenode
In-person

The ADC Quiz
19:30 - 21:00
ESC/Spacebar - Codenode
In-person

Networking
21:00 - 22:00
ESC/Spacebar - Codenode
In-person

Track 2
Learning Ranges (a Paradigm Shift)
09:00 - 09:50
Anthony Nicholls
In-person

In this talk I will demonstrate the steps I have taken to learn the new C++20 ranges library, and how taking advantage of it has required a mental shift in the way I write code.

We will explore what the ranges library has to offer by stepping through some basic problems, gradually changing from an imperative style to something more declarative (a shift sketched in the example after the list below). We will look at the relationship with functional programming, and the merits and demerits of adopting this style in C++ today.

What I hope you will take away from this talk is...

- What is a `range`
- What is a `view`
- What is the relationship between `container`, `iterator`, `range`, and `view`
- How to write a basic `view`
- An introduction to the ranges library and how to use it
- How the ranges library can improve code readability, testability, and thread safety
- Examples, resources, and tips, to help you adopt the ranges library
- An overview of some of the difficulties you might face adopting the ranges library today
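
To illustrate that imperative-to-declarative shift, here is a small invented example (not taken from the talk) contrasting a hand-written loop with a lazily evaluated C++20 ranges pipeline:

```cpp
#include <iostream>
#include <ranges>
#include <vector>

int main()
{
    const std::vector<int> midiNotes { 60, 61, 62, 63, 64, 65 };

    // Imperative: explicit loop, manual filtering and transformation.
    std::vector<int> raised;
    for (int note : midiNotes)
        if (note % 2 == 0)
            raised.push_back (note + 12);

    // Declarative: the same two steps composed as a lazy view.
    auto raisedView = midiNotes
                    | std::views::filter    ([] (int n) { return n % 2 == 0; })
                    | std::views::transform ([] (int n) { return n + 12; });

    for (int note : raisedView)
        std::cout << note << ' ';  // prints: 72 74 76
}
```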

Anthony Nicholls

Anthony is a senior software developer at Focusrite. He manages a team of C++ software developers who work on a range of products, many of which interact directly with hardware such as the popular Scarlett interfaces.
Before Focusrite, Anthony worked at Sonnox on JUCE plug-in development, PACE Eden integration, CI, and much more.
A Fresh Look at Spatial and Next-gen Audio
10:00 - 10:50
Stefan Kazassoglou and Garry Haywood
In-person

Spatial Audio is on the threshold of the mainstream. Many of its backers are hoping for it to challenge stereo’s dominant position as the de-facto format. Yet many challenges exist in giving spatial audio a chance. In this talk, we look at the technical challenges that need to be overcome if we’re going to deliver on the opportunities that come with making spatial audio the next chapter in audio history.

Stefan Kazassoglou

Stefan is an audio engineer and coder with audio post-production credits in music, cinema and art. After graduating from LIPA in 1998 he opened the first all-digital studio in Liverpool. Later he was a founder at BinaryCell, the world’s first Surround Sound Nightclub and Studio production complex, where he designed and built the audio systems infrastructure. He loves nothing more than solving deep problems in both production methodology and audio signal processing. He is currently a founder and Chief Audio Research Officer at Kinicho, an early-stage start-up specialising in volumetric spatial audio. He also plays the bass, loves chess and distance swimming.

Garry Haywood

Garry has been a coder, systems architect and consultant for over three decades in a range of industries. He has also been a DJ, music maker and artist. He first started playing records and making music in the 1980s. He loved plugging synths and other equipment together to find out what noise they could make. He can still be found creating audio feedback loops using mobile phones and Zoom. He is currently CEO/CTO at Kinicho, an early-stage start-up specialising in volumetric spatial audio.
Inside Modern Game Audio Engines
11:20 - 12:10
Simon N Goodwin
In-person

Modern console, mobile and VR games typically juggle tens of thousands of compressed samples at once, playing a select hundred or more via a cascade of codecs, filters and DSP effects, updated tens of times per second. Ambisonics, granular synthesis, multiple reverbs, psychoacoustic analysis, live and interleaved streams, are all thrown into the mix.

Game audio programmers design, create and embed an automated mix engineer for each game. Days or years later, player(s) call the shots, place the camera and tailor a custom mix the sound designers have never heard, one which must remain consistent, informative and immersive.

This talk explains the audio tech stack used on consoles, VR and mobile devices. It draws on decades of professional game programming, explaining (amongst other things) why you can never have enough voices, how to manage when 96 tyres bite the asphalt on the first corner of a Formula 1 simulation, the merits of multiple listeners, why most game sounds play at pitches other than those recorded, which standards are customarily ignored and why, how audio is the most real-time part of game software, and why only the worst case matters.

Simon N Goodwin

Simon N Goodwin has been creating games and audio tech professionally since the 1970s. In twelve years as Principal Audio Programmer with Codemasters he was responsible for the audio systems in multi-million selling simulations like F1, Colin McRae Rally and RaceDriver Grid, scoring six European number ones and two BAFTA awards. Since helping to found the Audio For Games international conference series in 2009 Simon has written a book for Focal Press, Beep to Boom, tracing the history and implementation of interactive audio from binary clicks to Higher Order Ambisonics.
Session To Be Confirmed
12:20 - 12:50
Mayk.it
In-person
Why the Future of Audio Is Functional
14:00 - 14:50
Nick Thompson
In-person

We'll start with a hypothesis that the way we approach writing modern audio software and DSP will significantly change in the near future. In particular, the imperative, object oriented programming model that dominates modern audio software will be replaced by a more declarative, functional programming model.

We'll explore this hypothesis with some common approaches to writing audio DSP using C++, demonstrating what works and what doesn't. We'll note that every digital signal is a function of time, and often must be stateful: we need signals that change over time. Here, the object oriented model in C++ can be quite helpful, but carries with it complexity that limits our ability to compose larger processes from smaller processes, and engages the author with the "how" of the system when only the "what" really matters.

I'll then introduce Elementary, an audio engine, runtime, and framework for writing audio apps in a functional, declarative model with JavaScript. I'll show what it looks like to work with Elementary, and explain how the runtime maps this model onto the underlying audio engine to deliver a fast and intuitive approach to writing and delivering audio applications.

Nick Thompson

Nick Thompson is an audio software developer, contractor, and consultant. He is the owner of a small audio plugin company, Creative Intent, and the author of Elementary Audio and React-JUCE. Nick's interest lies in tools that enable and promote creativity and simplicity, both in music making and in software development.
Cornflower: a Cross-platform, List-based, Real-time Composition Environment for Mobile
15:00 - 15:50
Ron Herrema
In-person

Cornflower - a mobile audio-app prototype - is designed following a principle I call Sp/ARC: Specification and Audiation in Real-time Composition. It is meant to function as an improvisatory composition and recording environment.

The presentation will focus on the following:

- The interaction between Specification and Audiation in live coding
- The motivation for creating a personalised mobile composition and performance software
- The influence of other music programming and performance software
- The search for an appropriate framework to build the software
- The importance of visual aesthetics
- My response to user input in terms of features included and excluded
- The importance of the constraints built into the app’s design for fostering a productive creative experience
- The constraints imposed by the mobile platform and the issue of multiple form factors and hardware responsivity
- The role of physical modelling and timbre in the app experience
- General conclusions about mobile audio software development and hopes for the future

Ron Herrema

Ron Herrema is a composer of music, image, sound and code. In recent years he has published interactive research regarding his concept of Code as Prosthesis; developed three iPhone app/artworks; co-produced the winning entry at The Tate Modern's art hackathon; performed free improv in East London's Chisenhale Dance Space; composed music for an award-winning documentary; created an interactive installation for the Tate Britain; and been awarded a Chagrin Award by Sound and Music. He composes both acoustic and electroacoustic music, as well as both still and moving image. He has a particular affinity for algorithmic techniques in both realms. Currently residing in Bath, England, he is a native of Grand Rapids, Michigan and received his PhD in composition from Michigan State University. He teaches Creative Computing at Bath Spa University and previously led creative tech workshops for the London-based educational startup Codasign, as well as for London Music Hackspace. He is a certified teacher of Deep Listening®, having studied for several years with Deep Listening founder Pauline Oliveros. He has a related interest in integrating contemplative practices with technology and continues to look for ways to flow musically into, with, and through the world.
Sponsored Talk
16:20 - 16:50
TBD
In-person
Women In Audio Reception
18:00 - 19:30
(off site location to be confirmed)
In-person

Track 3
Introduction to Processor Intrinsics: Supercharge your DSP Code!
09:00 - 09:50
Jamie Pond
In-person

Are you getting the most efficiency out of your DSP code? Do you want to speed up your code by 8x?

Leveraging SIMD through processor intrinsics is one of the best ways to speed up your code, so you squeeze the most performance out of each CPU cycle, which ultimately gives users more headroom to be more creative, and that's what we're all here for.

In this talk we'll walk through optimising a standard modern audio convolution algorithm with processor intrinsics, achieving the kind of next-level performance that the likes of Native Instruments deliver to their users in their Massive X plugin.
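
To make the idea concrete, here is a minimal sketch of an intrinsics loop, a plain gain rather than the convolution the talk optimises, processing four floats per SSE instruction (alignment and remainder handling are assumed away):

```cpp
#include <immintrin.h>  // SSE intrinsics

// Scale a buffer by a constant gain, four samples at a time.
// Assumes numSamples is a multiple of 4 and buffer is 16-byte aligned;
// production code must handle the remainder and alignment explicitly.
void applyGainSSE (float* buffer, int numSamples, float gain)
{
    const __m128 g = _mm_set1_ps (gain);            // broadcast gain into 4 lanes

    for (int i = 0; i < numSamples; i += 4)
    {
        __m128 samples = _mm_load_ps (buffer + i);  // load 4 samples
        samples        = _mm_mul_ps (samples, g);   // 4 multiplies in one instruction
        _mm_store_ps (buffer + i, samples);         // store 4 results
    }
}
```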

Jamie Pond

Jamie is an audio developer who recently graduated with an MSc in Sound and Music Computing from Queen Mary University of London. He works as a contract plugin developer with Relab Development and PresentDayProduction, and specialises in writing high-performance DSP code in C++ and creating meaningful products for musicians and producers.
Statistical Consequences of Fat Beats - Exploring The Dynamics of Audio Signals
10:00 - 10:50
Christian Luther
In-person

Due to the dynamic nature of complex audio signals, objectively measuring and reasoning about properties like perceived loudness, dynamics or spectral balance is a surprisingly challenging task. At best, we usually average some short-term measurement over a longer time (e.g. when measuring LUFS). In such a process, a lot of interesting information can get lost, especially for audio recordings outside the realm of modern mainstream music.
For practitioners in the creative process of music recording, production and mastering, judging dynamics and determining how to manipulate them (e.g. using a compressor) is among the most difficult things to learn. The process requires drudgingly acquired experience and critical listening skills, while visual aids are mostly limited to observing short-term cues such as level meters or real-time analyzers.
This talk introduces some new methods for visualizing, analyzing and manipulating audio, which allow for a more meaningful and intuitive assessment of what's actually going on in practical music signals. Examples of music from different genres and epochs are shown and discussed, as well as single instrument and vocal tracks.
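
For context, the short-term measurement that then gets averaged might be as simple as the following block-RMS sketch (an illustration only, not the K-weighted LUFS pipeline or the talk's own methods); the interesting detail in a signal's dynamics lives in how such values fluctuate over time.

```cpp
#include <cmath>

// Root-mean-square level of one block of samples: a momentary snapshot
// of signal energy. Averaging many of these over minutes is exactly
// where the detail gets lost.
float blockRMS (const float* samples, int numSamples)
{
    if (numSamples <= 0)
        return 0.0f;

    double sum = 0.0;
    for (int i = 0; i < numSamples; ++i)
        sum += samples[i] * samples[i];

    return static_cast<float> (std::sqrt (sum / numSamples));
}
```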

Christian Luther

Christian is an audio DSP expert based in Hannover, Germany. In the past, he has worked in R&D with brands such as Access, Kemper Amps and Sennheiser. Right now, he is busy starting his own company as an independent plugin inventor and R&D engineer, which doesn't even have a proper name yet.
Let's Write a Reverb
11:20 - 12:10
Geraint Luff
In-person

No magic numbers, no strange nested filters, no tricky tuning.
Presenting a clean and flexible approach to writing a smooth high-quality reverb, using a variation on the classic feedback-delay network (FDN) structure.
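
As background for readers new to FDNs, here is a deliberately tiny two-channel sketch (invented for illustration; the talk's actual structure, tunings and matrix choices may well differ): delay-line outputs are mixed through an energy-preserving rotation and fed back with a decay gain.

```cpp
#include <array>
#include <vector>

// A toy two-channel feedback delay network. Real designs use more
// channels, Hadamard/Householder mixing and per-channel damping;
// the delay lengths here are arbitrary co-prime values.
struct TinyFDN
{
    std::array<std::vector<float>, 2> delays { std::vector<float> (1103, 0.0f),
                                               std::vector<float> (1453, 0.0f) };
    std::array<size_t, 2> pos { 0, 0 };
    float decay = 0.85f;

    float process (float input)
    {
        const float a = delays[0][pos[0]];  // read the delayed samples
        const float b = delays[1][pos[1]];

        // Orthogonal mix (a 45-degree rotation): energy-preserving,
        // so the loop stays stable for any decay gain below 1.
        const float s = 0.70710678f;        // 1/sqrt(2)
        const float mixedA = s * (a + b);
        const float mixedB = s * (a - b);

        // Write input plus attenuated feedback back into the lines.
        delays[0][pos[0]] = input + decay * mixedA;
        delays[1][pos[1]] = input + decay * mixedB;

        pos[0] = (pos[0] + 1) % delays[0].size();
        pos[1] = (pos[1] + 1) % delays[1].size();

        return 0.5f * (a + b);              // wet output
    }
};
```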

Geraint Luff

Geraint grew up with a strong interest in music, maths and programming. He now heads up Signalsmith Audio, a small company which provides custom audio/DSP algorithm design and implementation, as well as developing their own line of audio plugins.
Sponsored Talk
12:20 - 12:50
TBD
In-person
Making Audio More Accessible
14:00 - 14:50
Online
Panel
From the Ground Up: Developing Audio Hardware from Scratch
15:00 - 15:50
Allen Lee
Online

Developing audio hardware comes with a unique set of hardware and firmware challenges that, to a beginner, may present themselves as an impossible task. However, with an understanding of the basic design principles and tools, developing your own hardware can become an increasingly feasible prospect. While drawing examples from the development process of [BitMasher](https://www.meoworkshop.org/silly-audio-processing-bitmasher-edition/), a hand-held audio effects game, this talk will cover some key aspects of audio hardware and firmware design including bare-metal programming, circuit design and some points on manufacturability.

Allen Lee

Allen Lee is a Canadian hardware and software developer specializing in creating interactive devices. He has previously worked in the consumer electronics industry developing test and calibration systems for optical and motion sensors. He has since decided to focus on fusing audio, hardware and software to create whimsical audio things.
Sponsored Talk
16:20 - 16:50
TBD
In-person
Track 4
Exploring JavaScript Hacks for Modern Electronic Music Production
09:00 - 09:50
Xyzzy
Online

Can a text-driven interface replace and improve over DAW workflows, without sacrificing quality? If DAW primitives were provided as building material to the musician, how easy is it for musicians to program their own workflow?

The talk will explore things I have discovered while building bitrhythm, a tiny IDE for making music using tone.js and JavaScript libraries. I will talk about my frustrations with existing DAWs and how using JavaScript can make electronic music production approachable. There will be some music theory and live demos of techno/lofi/dnb that will be deconstructed. I will be showcasing the following ideas:

- Music Loop with Eval
- Using observers to control samples and web audio nodes
- Using hexadecimals and binary for a compact sequencer notation
- Timers and arrays for dynamic automation
- Side chain any parameter with code
- Build your own UI
- Get a VJ for free by using butterchurn and p5
- Use state to compose notes relative to other notes
- Use reminders for cuepoints
- Melody is just a constraint
- Fit a song inside a url parameter and share tracks
- Simplify sample use with urls
- Your DAW is cool, but does it have version control?

Xyzzy

Xyzzy is a web developer based out of India. In his spare time he messes with techno production using Electribes, Renoise and Reaper. He is currently experimenting with game music, loopers, web audio and VST internals for his next album. He is also working on web apps to help beginners learn electronic music.
LE Audio - The New Standard in Bluetooth® Audio Technology
10:00 - 10:50
Martin Woolley
Online

Wireless audio was one of the first ever applications of Bluetooth technology. It was originally based on the older Bluetooth BR/EDR and provides one-to-one audio streaming between two devices such as a smartphone and a pair of headphones.

LE Audio is a new Bluetooth audio technology. It uses the more efficient Bluetooth Low Energy (LE), has a new and better audio codec and supports completely new use cases such as audio sharing and broadcast audio. The new standard is defined in a series of technical specifications which collectively provide a generic audio framework for next generation wireless audio products.

This talk will review LE Audio and its constituent technical parts and specifications.

Martin Woolley

Martin Woolley works for the Bluetooth SIG, the technical standards body for Bluetooth® technology. He’s an industry veteran with over 30 years’ experience and has a degree in Computing and Mathematics. Martin is the Bluetooth SIG's Senior Developer Relations Manager for the EMEA region and is responsible for informing, educating and supporting developers in the region. More importantly, he owns a Roland SH09, a Korg Poly 61 and an Akai S950.
Make Electronic Musical Instruments With Faust and Microcosmos
11:20 - 12:10
Daniele Pagliero
In-person

Microcosmos is a small (130 × 80 mm) open-source electronic board, aimed at prototyping electronic musical instruments and learning electronics, microcontroller programming and audio DSP.

Daniele Pagliero

Daniele Pagliero is a software engineer with 20 years of experience programming for experimental projects in the fields of interaction design, art exhibitions, museum installations, gaming and mobile. At Faselunare he is researching and developing audio software for embedded devices. Alongside the activity at Faselunare he is also the lead developer at Nextatlas, a company that works in the field of trend forecasting, dealing with big data and AI. He is also a musician (he plays bass and live electronics) and released many records and toured Europe several times.
Sponsored Talk
12:20 - 12:50
TBD
In-person
Creating Music on the Web
14:00 - 14:50
Ben Morss and Hongchan Choi
Online

The Web Audio API allowed web apps to synthesize sound, add effects, and generally create and transform sound in ways that were previously inconceivable on the web. With WebAssembly and AudioWorklets now in the mix, popular music creation platforms are adding the Web as a development target! We'll survey the current scene, we'll show you how to port your native C++ apps to the Web, and we'll discuss what's coming next to a browser near you.

Ben Morss

Ben is a Developer Advocate and Product Manager at Google, where he’s working to improve the web for developers and users alike. Prior to Google, he worked at the New York Times and AOL, and before that he was a full-time musician. He earned a BA in Computer Science at Harvard and a PhD in Music at the University of California at Davis. He also played with bands like Cake and wrote two musicals based on the Angelina Ballerina television series. Rumor is that he still runs a band called Ancient Babies, and he may also be writing a musical that’s not really about Steve Jobs.

Hongchan Choi

Hongchan is a Technical Lead of the Chrome Web Audio team and a co-chair of the W3C Audio Working Group. He codes, writes, and speaks about Web Audio. His mission at Google is making audio better on the web; he also cares about building a healthy ecosystem with developers and industry partners. Before Google, Hongchan studied computer music at CCRMA, Stanford University. Even before that, he spent years creating and teaching music in South Korea.
Hybrid Prototyping With Web Tech and JUCE/C++
15:00 - 15:50
Arthur Carabott
Online

We (Output) present our approach to rapid prototyping through the combination of web technologies and JUCE/C++.

Building on our previous work using WebViews for plugin UIs, we will demonstrate how we combine web technologies (TypeScript, HTML, CSS) and APIs (Web Audio, Web MIDI) with JUCE and C++ to prototype new features and products.

We will discuss the journey of a major new feature, from concept to prototype, to final implementation; demonstrating how we combine web technologies, JUCE/C++, and visual design tools to iterate on a concept and deliver a complex feature with high confidence that the final version would meet our expectations.

We will discuss how web tech fits into our prototyping process, the tools we use, and the benefits and limitations of this approach.

Arthur Carabott

Arthur Carabott is a UX Engineer and Project Lead at Output, working across design and development he creates new products and features through prototyping. Previously he worked on the Bronze generative music format and created interactive music experiences, including an AR app with composer Anna Meredith, a spatial installation for the Apple Store with 100 synchronized iPads backing the vocalist Chagall, and a pavilion / giant musical instrument for the London Olympics with Asif Khan.
Sponsored Talk
16:20 - 16:50
TBD
In-person

16th November 2021

Please note that all times are in UTC

Track 1
Cultural Bias in Music Technology
09:00 - 09:50
Khyam Allami
In-person

There is no such thing as neutral technology. What DAWs, synthesisers, virtual instruments, audio effects plugins, notation programs, and even AI and machine-learning models have in common is that they are almost all exclusively based on Western music theory, concepts and perspectives.

Cultural bias inscribed in technology mirrors the bias that runs through Western music theory, which to this day has not succeeded in addressing and dismantling its non-neutrality and the colonial framework that informed many of its canonical 19th-century works.

Things could be different. Technologies could equally be applied to embrace all musical cultures, yet these possibilities are rarely implemented, and when they are, it is mostly to merely symbolic effect.

This presentation will focus on the possibilities for transcultural music technologies by asking: Which structural and technological changes are necessary to allow for more liberated, creative, and culturally balanced processes of music-making? How can we focus on the relational character of all creative processes and their value to us, and emphasise the unique ways creativity connects and responds to our times and contexts?

Khyam Allami

Khyam Allami is an Iraqi-British multi-instrumentalist musician, composer, researcher and founder of Nawa Recordings. He holds a BA and Masters in Ethnomusicology from SOAS, University of London, and is currently completing an AHRC/M4C funded PhD in composition at the Royal Birmingham Conservatoire.
Vocoder Taxonomy
10:00 - 10:50
Stefan Stenzel
In-person

Many species of vocoders exist, with vastly different technology and purpose. Sadly, only a fraction of this technology has been used - or abused - for musical purposes so far. My talk shall focus on a number of available vocoder technologies and their potential for use in music, with many live examples but also some math and code. The goal is to offer an entertaining and inspiring talk for all skill levels.

Stefan Stenzel

Stefan Stenzel joined Waldorf Electronics in the 90s and soon became Director of R&D. He introduced digital signal processing to the company, which previously relied on third-party ASICs and analog technology. He contributed to projects like the Waldorf Wave, Microwave 2 and Q, and provided concepts for many synthesizers like Pulse, Blofeld, Streichfett and STVC, to name just a few. In 2006, after the bankruptcy of Waldorf Electronics, he co-founded Waldorf Music GmbH and became CEO and CTO. He left the company in 2017 and now works as a consultant and development contractor for various companies. As part of his research, Stefan developed some novel and unique algorithms for audio signal processing, some of which are part of actual products. Inspired by a folder on his hard drive with various variations of vocoders, he came up with the idea to present a talk about vocoder taxonomy.
Morning Break
10:50 - 11:20
Break
Tabs or Spaces?
11:20 - 12:10
Julian Storer and Dave Rowland
In-person

A group of opinionated expert programmers will argue over the right and wrong answers to a selection of programming questions which have no right or wrong answers.

We'll aim to cover a wide range of topics such as: use of locks, exceptions, polymorphism, microservices, OOP, functional paradigms, open and closed source, repository methodologies, languages, textual style and tooling.

The aim of the session is to demonstrate that there is often no clear-cut best-practice for many development topics, and to set an example of how to examine problems from multiple viewpoints.

Julian Storer

Jules is a developer and founder who has created several audio technologies and companies in his 20+ year career. He's best known for creating JUCE, Tracktion and SOUL.

Dave Rowland

Dave Rowland is the director of software development at Audio Squadron (owning brands such as Tracktion and Prism Sound), working primarily on the digital audio workstation, Waveform and the engine it runs on. Other projects over the years have included audio plugins and iOS audio applications utilising JUCE. In academia, David has taught on several modules at the University of the West of England on programming for audio. David has a passion for modern C++ standards and their use to improve code safety and brevity, has spoken at the Meeting C++ conference and is a regular speaker at the Audio Developer Conference and related monthly meetup. Past presentations: https://github.com/drowaudio/presentations/
Focusrite
12:20 - 12:50
Speaker To Be Confirmed, Focusrite
In-person

AI, Audio, Music, and the Law
14:00 - 14:50
Heather Rafter
In-person
Panel

Heather Rafter

Heather Dembert Rafter has been providing legal and business development services to the audio, music technology, and digital media industries for over twenty-five years. As principal counsel at RafterMarsh US, she leads the RM team in providing sophisticated corporate and IP advice (US practice) and entertainment law services (UK) to content creators, hardware and software developers, and event producers, among others. Heather served as Director of Legal Affairs and later General Counsel for Digidesign, the audio division of Avid Technology and the creator of the award-winning Pro Tools software. Heather was instrumental in the company’s acquisition of M-Audio and Sibelius Software and represented audio console leader Euphonix in its acquisition by Avid, as well as counsel for Imagine Research in its acquisition by iZotope, another audio industry flagship company. She began her legal career as a litigation attorney at Gibson Dunn & Crutcher, focusing on intellectual property and commercial matters. Heather served as Chair of the American Bar Association Section of Science & Technology Law and is a member of the ABA’s Fund for Justice and Education. Heather currently serves as an Advisor to Ardian, a French-based private equity firm, on the Board of Advisors of the Bob Moog Foundation and as a Senior Advisor for MediaBridge Capital, a leading investment banking and strategic advisory firm serving the media technology industry. Heather, an advocate for gender parity in the legal and music industry, also serves on the Boards of SoundGirls.org and WiMN, the Women’s International Music Network. A proud alumna, Heather serves on the advisory council for the Department of Music at Princeton University. Heather is a graduate of Princeton University (A.B., Woodrow Wilson School of Public & International Affairs, magna cum laude) and Columbia University School of Law (J.D., Harlan Fiske Stone Scholar). Heather’s sons both work in high tech, and her daughter is finishing her senior year at Princeton University. Heather is an avid yogi and fan of live music.
CI/CD for Audio Plugin Development
15:00 - 15:50
Jatin Chowdhury
In-person

Continuous Integration and Continuous Deployment (CI/CD) are software development practices that are useful for maintaining code quality, and catching bugs before they affect end-users. This talk will discuss why CI/CD can be helpful for teams developing audio plugins, and compare some of the tools that are available for creating CI/CD pipelines. Finally, the talk will demonstrate some workflows for accomplishing various CI/CD tasks in the context of audio plugins.

Jatin Chowdhury

Jatin is an audio signal processing engineer from Denver, Colorado, USA. For the past several years he has worked as a developer of audio effects and other music technology software. Jatin is a graduate from the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, where he studied audio signal processing.
Afternoon Break
15:50 - 16:20
Break
Sponsored Talk
16:20 - 16:50
TBD
In-person
Reflections on Human and Machine Creativity
17:00 - 17:50
Rebecca Fiebrink
In-person
Keynote

Rebecca Fiebrink

Dr Rebecca Fiebrink makes new accessible and creative technologies. As a Reader at the Creative Computing Institute at University of the Arts London, her teaching and research focus largely on how machine learning and artificial intelligence can change human creative practices. Fiebrink is the developer of the Wekinator creative machine learning software, which is used around the world by musicians, artists, game designers, and educators. She is the creator of the world’s first online class about machine learning for music and art. Much of her work is driven by a belief in the importance of inclusion, participation, and accessibility: she works frequently with human-centred and participatory design processes, and she is currently working on projects related to creating new accessible technologies with people with disabilities, and designing inclusive machine learning curricula and tools. Dr. Fiebrink previously taught at Goldsmiths University of London and Princeton University, and she has worked with companies including Microsoft, Smule, and Imagine Research. She holds a PhD in Computer Science from Princeton University.
Closing Address
17:50 - 18:10
In-person
Evening Meal & Networking
18:10 - 19:30
ESC/Spacebar - Codenode
In-person

Open Mic Night
19:30 - 21:00
Timur Doumler
In-person

The ADC Open Mic Night is back! A fun, informal evening with lightning talks, music performances, and some impromptu standup comedy, hosted by Timur Doumler.

If you are attending the ADC on site, you can contribute to the Open Mic night with a 5 minute talk or performance! Submit your idea here: https://forms.gle/QtSLM9JueCQ55A6Z7

This is an event exclusively for on-site attendees. It won't be recorded, published, or streamed online.

Timur Doumler

Timur Doumler is the Developer Advocate for C++ at JetBrains. As a developer, he specialises in audio and music software. Timur is an active member of the ISO C++ standard committee, co-founder of the music tech startup Cradle, and conference chair of the Audio Developer Conference (ADC). He is passionate about building inclusive communities, clean code, good tools, low latency, and the evolution of the C++ language.

Timur holds a PhD in astrophysics and originally got into C++ when he was looking for an alternative to Fortran for performing numerical simulations of the large-scale structure of the Universe.

Networking
21:00 - 22:00
ESC/Spacebar - Codenode
In-person

Track 2
JUCE Development for Linux and Raspberry Pi
09:00 - 09:50
Kieran Coulter
In-person

Recent JUCE releases have made the development landscape for Linux and Raspberry Pi much more streamlined than in the past.

This talk offers newcomers to the platform an in-depth overview of how to achieve workflows comparable to macOS and Windows development. It will discuss native development workflows for Ubuntu as well as using Ubuntu as a remote interface for the Raspberry Pi.

Source code accompanying the talk will also provide valuable resources, including example toolchains, tips and tricks, as well as project templates for build artifact types not typically needed on other platforms, such as binaries that can run either with a local GUI or in headless mode with a remote GUI.

Kieran Coulter

Kieran began his academic career with a diploma in Audio Engineering, followed by Bachelor's degrees in Music and Computer Science. While studying computer science, his interests focused on realtime graphics, spatial audio, and human-computer interaction. Since graduation, his career has tended towards spatial audio systems programming and its peripherals.
Synchronising Clocks - Simultaneous Audio Playback/capture on Multiple Interfaces, Devices And/or Networks
10:00 - 10:50
Fabian Renn-Giles
In-person

We all know how to play/capture audio from a single audio interface with our favourite audio API. But how do you play/capture audio synchronously across multiple audio interfaces, computers, local networks or even the internet? The basic principle is always the same and can roughly be split into three distinct tasks:

1. Query the current presentation/capture time of each audio interface
2. Predict and convert between presentation/capture times of different clock domains using mathematical models
3. Control the playback/capture rate of each audio interface.

After a brief introduction, this talk will examine each of the above tasks in detail and how various algorithms and techniques apply to different synchronisation applications. The listener will benefit from a practical focus, by learning how various industry standards approach the problem (AVB, AirPlay, RTP, …), which APIs are available on different platforms and various practical considerations when using WiFi and/or ethernet as a transport to synchronise audio.

The talk will end with a case study on how the author helped achieve <10μs audio playback/capture synchronisation accuracy via WiFi on the Syng Cell Alpha.
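
As an illustration of task 2 above, here is a minimal sketch of a linear clock-domain model with smoothed offset correction (invented for this summary; real systems such as PTP-style protocols filter timestamp pairs far more carefully):

```cpp
#include <cstdint>

// Model a remote clock as: remote ~ offset + skew * local,
// refined from periodic (local, remote) timestamp pairs.
struct ClockModel
{
    double offset = 0.0;  // nanoseconds
    double skew   = 1.0;  // remote ticks per local tick

    void update (int64_t localNs, int64_t remoteNs, double alpha = 0.05)
    {
        // Exponential smoothing nudges the offset toward each observation
        // instead of jumping, avoiding audible rate corrections downstream.
        const double observed = double (remoteNs) - skew * double (localNs);
        offset += alpha * (observed - offset);
        // A fuller model would also regress the skew over a window of
        // samples; it is held constant here for brevity.
    }

    int64_t toRemote (int64_t localNs) const
    {
        return int64_t (offset + skew * double (localNs));
    }
};
```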

Fabian Renn-Giles

Fabian Renn-Giles (PhD) is a freelance C++ programmer, entrepreneur and consultant in the audio software industry. Before this, he was staff engineer at ROLI Ltd. and the lead maintainer/developer of the JUCE C++ framework (www.juce.com) - an audio framework used by thousands of commercial audio software companies. Before joining ROLI, he completed his PhD at Imperial College London, developing a numerical quantum optics solver with modern digital signal processing techniques and C++/MPI/OpenCL. During his academic career, Fabian regularly taught C++ to post- and undergraduate students in tutor groups. In 2005, Fabian co-founded the audio plug-in start-up Fielding DSP which specialises in real-time audio plug-ins for audio mastering. Fabian now regularly consults on various audio related software projects. Additionally, he is a regular speaker and/or workshop leader at the audio developer conference ADC.
Reducing Space and Time Limitations in Music Performances
11:20 - 12:10
Rodolfo Cangiotti
In-person

Music is widely and generally defined as an art that consists in devising and producing structured sequences of sounds. The latter are physical vibratory phenomena, characterized by properties – propagation speed, for instance – which establish a strict relation between the phenomenon itself and the dimensions of space and time.

Interactions between musicians and listeners during a music performance are consequently delimited by both those dimensions. Despite this, is it nonetheless possible to reduce how much space and time affect the performative acts?

In this talk, the concept of Networked Music Performance (NMP) will be introduced as a solution to the issue in question. To demonstrate the validity of that solution, an ecosystem for NMP named MuSNet will be presented to the audience, and its implementation process and subsequent uses will be described as a case study.

Rodolfo Cangiotti

Rodolfo Cangiotti is a software developer at WuBook. He holds a bachelor's degree with honors in Electronic Music from the Conservatory G. Rossini (Pesaro, ITA), where he also deepened his knowledge of music composition by attending additional tailored courses. He is deeply fascinated by the intersection between digital information technologies and sonic arts, and dedicates most of his spare time to projects in that field. He firmly believes in the open-source philosophy and in freely sharing knowledge.
Session To Be Confirmed
12:20 - 12:50
Moodelizer
In-person
Real-time Remote Jams With WebRTC and Web MIDI
14:00 - 14:50
Philip Miller
In-person

What if "remote jamming" didn't require any additional software to be installed? What if we could collaborate across vast distances in real-time with just a web browser?

Thanks to modern web standards, we can!

Using WebRTC for real-time multimedia and data, and Web MIDI to control some hardware, we can create and collaborate like we're in the same room.

Philip Miller

Phil has spent a lot of time building teams and software, as well as writing things to help other teams build better software. He is currently a Senior DevRel Engineer at Daily. When he's not writing docs, blogs, or demos, you can find him in his home studio playing drums or synthesizers.
C++ Expression Templates for Specifying Compile-time DSP Structures
15:00 - 15:50
Matthew Robbetts
In-person

We explore the use of expression templates for the zero-overhead composition of small units of DSP.

Our primary aim is to approach the low-friction, fine-granularity expressiveness of DSL systems like FAUST, while staying fully within the confines of C++ and allowing the full use of its type system. With this, the final graph construct is a simple C++ class, which can be instantiated and exercised without any scaffolding or external tooling. The conceptual machinery can be uniform, where tunable algorithm parameters quickly reduce to ordinary graph data and are processed themselves in one flat graph space.

The idea naturally extends to the processing of arbitrary data types through the graph - such as abstract control types, and types supporting block-based and frequency-domain processing. Further, it is trivial to embed the resulting classes into plugins via template adapters. Finally, the expression-template paradigm enables global optimization of the result.

When using such a system, development can be a very low-friction process, granularity can be fine, and the exact same user code can be embedded into both real-time and offline contexts without special affordances.
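
A minimal sketch of the general technique follows (all names invented here, not the talk's actual API): each DSP unit is a small value type, and composing two units produces a new type that the compiler can inline end to end, with no virtual dispatch.

```cpp
#include <cmath>

// Composing two nodes yields a new compile-time type...
template <typename A, typename B>
struct Chain
{
    A a; B b;
    float process (float x) { return b.process (a.process (x)); }
};

// ...built with an ordinary operator, expression-template style.
template <typename A, typename B>
Chain<A, B> operator>> (A a, B b) { return { a, b }; }

struct Gain
{
    float amount;
    float process (float x) { return x * amount; }
};

struct SoftClip
{
    float process (float x) { return std::tanh (x); }
};

int main()
{
    auto graph = Gain { 2.0f } >> SoftClip {};  // type: Chain<Gain, SoftClip>
    return graph.process (0.5f) > 0.0f ? 0 : 1; // tanh(1.0f)
}
```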

Matthew Robbetts

Matthew is a C++-focused audio developer with a passion for bringing highly expressive, functional-programming approaches to real-time systems. He previously worked for Apple, where he was responsible for audio/telephony DSP algorithm development; and currently works for Syng, on their audio DSP and architecture stack.
Sponsored Talk
16:20 - 16:50
TBD
In-person
Track 3
Additive Synthesis Using the CORDIC Algorithm
09:00 - 09:50
Cesare Ferrari
In-person

The majority of commercial synthesisers use either sample-based or subtractive techniques. Additive synthesis is rarer, but offers significant capabilities which are hard or impossible to replicate any other way. This talk explores additive synthesis, its principles, and how to implement banks of oscillators efficiently using the CORDIC algorithm. We will go on to discuss techniques for generating additive patches from samples, and some of the unusual modulation possibilities additive synthesis offers.
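
For orientation, the standard trick behind cheap oscillator banks is a per-sample phasor rotation. The sketch below (invented for illustration) uses plain multiplies; a CORDIC implementation instead evaluates the rotation with a fixed sequence of shift-and-add micro-rotations, which suits hardware and multiply-poor targets.

```cpp
#include <cmath>

// One oscillator in a bank: rotate the phasor (x, y) by a fixed angle
// each sample and read off the sine component.
struct RotationOscillator
{
    float c, s;      // cos/sin of the per-sample phase increment
    float x = 1.0f;  // phasor starts at angle 0
    float y = 0.0f;

    RotationOscillator (float freqHz, float sampleRate)
        : c (std::cos (2.0f * 3.14159265f * freqHz / sampleRate)),
          s (std::sin (2.0f * 3.14159265f * freqHz / sampleRate)) {}

    float next()
    {
        const float nx = c * x - s * y;  // 2-D rotation
        const float ny = s * x + c * y;
        x = nx; y = ny;
        return y;
    }
};
```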

Cesare Ferrari

Cesare is a developer with over 25 years' experience of realtime software development. He has worked in audio since 2001 and is the co-designer of the SOUL programming language.
Practical Guide to Optimized High-Quality Wavetable Oscillators
10:00 - 10:50
Matt Tytel
In-person

Developing a wavetable oscillator requires combining knowledge and experience across several parts of digital signal processing. Information on developing wavetable oscillators is spread across the internet, so it takes a lot of trial and error to find the best combination of approaches for your needs.

In this talk we'll go over all you need to know to start creating fast and high quality wavetable oscillators. We'll compare the pros and cons of several different approaches used in professional synths to band limit wavetables, interpolate buffers, and optimize these using SIMD.
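
As a baseline for the approaches the talk compares, a bare-bones linearly interpolating wavetable oscillator might look like the sketch below (illustrative only; band-limiting, higher-order interpolation and SIMD, the talk's actual subject matter, are omitted):

```cpp
#include <vector>

// Plays one cycle of a waveform stored in 'table', with linear
// interpolation between adjacent samples.
struct WavetableOsc
{
    std::vector<float> table;  // one cycle of the waveform
    double phase     = 0.0;    // position, in table samples
    double increment = 0.0;

    void setFrequency (double freqHz, double sampleRate)
    {
        increment = freqHz * double (table.size()) / sampleRate;
    }

    float next()
    {
        const size_t i0   = size_t (phase);
        const size_t i1   = (i0 + 1) % table.size();
        const float  frac = float (phase - double (i0));

        phase += increment;
        if (phase >= double (table.size()))
            phase -= double (table.size());

        return table[i0] + frac * (table[i1] - table[i0]);  // linear interp
    }
};
```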

Matt Tytel

Matt Tytel is a synth designer and game developer living in Vermont. He is the developer of the recently released wavetable synth, Vital. Before Vital, Matt jumped around the industry working at music plugin companies (Cakewalk) and music game companies (Harmonix).
Quantum Sequencer: from Prototype to VCV Plug-in
11:20 - 12:10
George Gkountouras
In-person

Quantum computing is exciting, from challenging our ideas about the physical world to promising revolutionary applications. But how can audio developers exploit these opportunities?
This talk will present the creation of a quantum plug-in. We'll cover quantum computing, explaining seemingly counter-intuitive phenomena. Any maths required to explore this topic will be kept to a minimum.
Then, we'll switch to a product design perspective, developing our new module: a sequencer with real-time controls for the virtual modular environment VCV.
Afterwards, we will explore quantum addition via the quantum equivalents of bitwise operators (xor, and).
Going over to the implementation, we will plan and simulate a quantum circuit. Changes in probability amplitudes can be calculated and viewed in a three-dimensional representation of the multi-qubit statevector, which aids understanding.
Finally, we will port the prototype to a real-time VCV audio plug-in in C++.
Attendees should leave with a better understanding of quantum mechanics. In addition, they will see the development of a complete VCV plugin, from ideation to design, a prototype, and finally a working real-time product.

George Gkountouras

George Gkountouras (MSc ECE) is a software engineer, researcher and entrepreneur in the audio software industry. He believes that AI will enable the creation of state-of-the-art music technology products. He has previously appeared at the audio developer conference ADC for a lightning talk. During his academic career, George regularly taught DSP to undergraduate students. He's worked on compilers, circuit simulators and audio plug-ins. He is also interested in Android audio applications and embedded systems.
Sponsored Talk
12:20 - 12:50
TBD
In-person
Leveraging C++20 for Declarative Audio Plug-in Specification
14:00 - 14:50
Jean-Michaël Celerier
In-person

In this talk, we'll explore how a few features of recent C++ standards enable creating audio plug-ins in a declarative and data-oriented way: reflection-friendly features such as concepts and destructuring make it possible to invert the usual mechanism of inheriting from a base class, by instead letting the compiler introspect custom plug-in-specific data structures. This minimizes overhead both in terms of user code and run-time performance, and improves interoperability between distinct systems and projects.

This free and open-source work is available at https://github.com/jcelerier/vintage
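
A minimal sketch of that inversion (member names invented for the example, not taken from the vintage repository): the plug-in author writes a plain struct, and a concept lets the wrapper verify its shape at compile time instead of requiring a base class.

```cpp
#include <concepts>
#include <string_view>

// The "contract" the wrapper expects, expressed as a C++20 concept.
template <typename T>
concept AudioProcessor = requires (T t, float* const* io, int frames)
{
    { T::name } -> std::convertible_to<std::string_view>;
    { t.process (io, frames) };
};

// A user's plug-in: just data and a process function, no inheritance.
struct MyGain
{
    static constexpr std::string_view name = "My Gain";
    float gain = 0.5f;

    void process (float* const* io, int frames)
    {
        for (int i = 0; i < frames; ++i)
            io[0][i] *= gain;
    }
};

static_assert (AudioProcessor<MyGain>);

// Host-side scaffolding can now be generated generically.
template <AudioProcessor P>
struct PluginWrapper { P processor; };
```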

Jean-Michaël Celerier

Jean-Michaël Celerier (https://jcelerier.name/), born in France in 1992, is a freelance researcher, interested in art, code, computer music and interactive show control. He studied software engineering, computer science & multimedia technologies at Bordeaux, and obtained his doctorate on the topic of authoring temporal media in 2018. He develops and maintains a range of free & open-source software used for creative coding, digital and intermedia art, which he leverages in various installations and works; in particular, most of his work is centered on the ossia platform for which he is the main developer. He enjoys organizing events centered on programming and media art - most recently the Linux Audio Conference, and a C++ meetup in Bordeaux. He teaches all sorts of creative coding languages (PureData, Processing, OpenFrameworks, etc) to both computer science and graphics design students.
Cloud Computing in the Audio Space
15:00 - 15:50
Parashar Krishnamachari and Evan Brand
Online

In this talk, we (SKR Audio Labs) introduce our project on creating a DAW that operates entirely in the cloud. We believe that this is the next step in the realm of audio production that will free users from numerous barriers and in turn open up the market for more creators.

Throughout this talk, we will go over how we approached the problem of moving to the cloud, the numerous technical and practical challenges, and what we do to solve and/or mitigate these issues. In addition, we cover the pros and cons of the various solutions and focus areas explored throughout the course of development, as well as how they relate to larger software architecture decisions.

Parashar Krishnamachari

Parashar is a veteran software engineer with nearly 30 years of experience in VFX, games, films, AR/VR, etc. with a music background in Indian classical music. His work has gone from classic video games to technologies that have earned Academy Award nominations. Whether gaming, films, music, or other fields, Parashar's career has almost entirely been focused on creating tools and technologies that enable content creators to do more and work more effectively with fewer creative limitations.

Evan Brand

Evan is an American technology entrepreneur who develops products and supports causes that empower artists. He has contributed product design and strategy at @Home Network, Liberate, IBM, Hitachi, and Hewlett Packard.
Sponsored Talk
16:20 - 16:50
TBD
In-person
Track 4
Immersive Sound for Electric Guitar: Further Developments of the GASP Project
09:00 - 09:50
Duncan Werner, Bruce Wiggins, Emma Fitzmaurice and Matthew Hart
Online

The GASP project ‘Guitars with Ambisonic Spatial Performance’ investigates the design and realisation of an Immersive Guitar System. Few instruments exist that make use of spatial sound production.

GASP is an ongoing research project, where our interest in Ambisonic algorithmic research and guitar sound production is combined with off-the-shelf hardware and bespoke software. It is an innovative audio project, fusing the musical with the technical, combining individual string timbralisation with Ambisonic immersive sound. See: http://gaspproject.xyz/wp-content/uploads/2020/07/GASP-paper-for-Innovation-in-Music.pdf

For Ambisonic playback or monitoring, the audio is typically heard over a ring of eight (or more) loudspeakers, or alternatively over headphones using binaural reproduction, which includes future applications for Virtual Reality platforms.

Our more recent work investigates live performance applications in small or large format concert systems with Dolby Atmos.

Further information: GASP – Guitars With Ambisonic Spatial Performance (2021) [online], http://gaspproject.xyz

Duncan Werner

Duncan graduated in Electrical/Electronic Engineering from Aston University in the late seventies but, as a keen musician, moved into the music industry, working as a recording and touring musician in the UK and Europe before being employed by Chrysalis Music Group as a studio sound engineer. He went on to study postgraduate Music Information Technology at City University London. He was programme leader and senior lecturer in Music Technology and Production at the University of Derby for over 25 years, spending the last ten engaged with the study of auditory perception. His research interests include immersive music production, in particular the GASP project. He is now an independent researcher, continuing spatial/immersive music production using the GASP system.

Bruce Wiggins

Bruce graduated with first-class honours in Music Technology and Audio System Design from the University of Derby in 1999. His interest in audio signal processing spurred him to stay on at Derby, completing his PhD, "An Investigation into the Real-time Manipulation and Control of 3D Sound Fields", in 2004, in which he solved the problem of generating Ambisonic decoders for irregular speaker arrays and investigated the optimisation of binaural/transaural systems. His research into Ambisonics was featured as an impact case study in the national Research Excellence Framework in 2014 and will be again in 2021. His latest work centres on the auralisation of rooms to very high-order Ambisonics with head tracking.

Emma Fitzmaurice

Emma Fitzmaurice graduated from the University of Derby, UK, in 2019, having completed a BSc (Hons) in Music Technology and Production followed by an MSc in Audio Engineering. It was here that she gained an interest in spatial audio and the GASP project, making significant contributions to GASP’s spatial control system. She now works in Quality Assurance on Novation products at Focusrite and continues to develop the GASP project in her spare time.

Matthew Hart

Matthew Hart is an English guitarist and a graduate in Music Technology and Production from the University of Derby; he is currently studying for an MA in Sound Design for Video Games with Thinkspace Education, University of Chichester. His role largely involves investigating novel programming possibilities of the Line 6 Helix Native guitar processing system and how it can be utilised to expand the creative applications of the GASP project.
Game Audio: Data & Context for Richer Narratives in Soundtracks
10:00 - 10:50
Xan Williams and Dominic Vega
Online

“Tracklaying” (triggering) individual sounds is accomplished with game-dev and middleware tools in similar ways across many different types of games. But how are sounds choreographed into a soundtrack in inherently chaotic, open-world sandbox games?
At Avalanche Studios, we've spent a number of years learning some right (and wrong) ways to take information from the game and use it to define meaningful moments.
We’ll show examples from past projects where using a sound designer’s idea of narrative context and the right game data helped us orchestrate better soundtracks.
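
As a deliberately simplified illustration of that idea (not Avalanche's actual system; every name here is hypothetical), raw game data might be reduced to a coarse musical context like this, which would then drive a state or parameter switch in the audio middleware:

```rust
// Purely illustrative: derive a coarse musical context from game data,
// so the soundtrack reacts to meaningful moments rather than to every
// individual event. All names are hypothetical.
#[derive(Debug, PartialEq)]
enum MusicContext {
    Explore,
    Tension,
    Combat,
}

struct GameState {
    enemies_aware_of_player: u32,
    player_health: f32, // 0.0..=1.0
    seconds_since_last_shot: f32,
}

fn derive_context(state: &GameState) -> MusicContext {
    if state.enemies_aware_of_player > 0 && state.seconds_since_last_shot < 10.0 {
        MusicContext::Combat
    } else if state.enemies_aware_of_player > 0 || state.player_health < 0.3 {
        MusicContext::Tension
    } else {
        MusicContext::Explore
    }
}

fn main() {
    let state = GameState {
        enemies_aware_of_player: 2,
        player_health: 0.8,
        seconds_since_last_shot: 3.0,
    };
    // The derived context would drive a middleware state change.
    assert_eq!(derive_context(&state), MusicContext::Combat);
    println!("music context: {:?}", derive_context(&state));
}
```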

Xan Williams

Xan Williams is a Senior Sound Designer at Avalanche Studios who has designed and implemented some of their more complex audio mechanics. He has spent time scripting and utilizing game and system data to derive audio contexts for sound designers. The audio team at Avalanche Studios collaborates in developing open-world action games of varying size and complexity, with a team of multi-disciplinary audio experts.

Dominic Vega

Dominic Vega is a Lead Sound Designer at Avalanche Studios whose work has focused on the craft of using interactive mixing and sound orchestration to communicate narrative intention to players. The audio team at Avalanche Studios collaborates in developing open-world action games of varying size and complexity, with a team of multi-disciplinary audio experts.
Creative Coding in Rust: Building a Generative Music App With Nannou
11:20 - 12:10
Zsolt Török
In-person

Over the last few years, Rust has made leaps and bounds in establishing itself as a viable alternative to C++ for real-time audiovisual applications. [Nannou](https://nannou.cc) is a creative coding framework which aims to provide a beginner-friendly, batteries-included experience for coders and artists alike, and enable them to take advantage of the performance, expressive power, and safety guarantees of Rust.

Creative coding is the act of writing computer programs in order to create something expressive rather than something purely functional: works of art, design, architecture, or even fashion. It includes creating or manipulating images, interfacing with sensors and motors, generating musical compositions, controlling lights and lasers, and creating long-running interactive art installations, just to name a few. Nannou provides the tools to accomplish all of the above, and more.

In this talk I will give a high-level introduction to Nannou, explain the anatomy of a typical Nannou application, and walk through the process of building a simple generative music application from scratch.
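
By way of illustration, a hypothetical minimal Nannou sketch might look like the following. The model-update-view split is Nannou's standard application anatomy; the specific state and drawing here are invented for this example, and a generative music app would additionally drive an output stream via the `nannou_audio` crate.

```rust
use nannou::prelude::*;

fn main() {
    nannou::app(model).update(update).simple_window(view).run();
}

// All application state lives in the Model; in a generative music app
// this would also hold sequencer and synthesis state.
struct Model {
    phase: f32,
}

fn model(_app: &App) -> Model {
    Model { phase: 0.0 }
}

// `update` runs once per frame; this is where generative logic belongs.
fn update(app: &App, model: &mut Model, _update: Update) {
    model.phase = app.time; // seconds since the app started
}

// `view` renders the current state of the Model.
fn view(app: &App, model: &Model, frame: Frame) {
    let draw = app.draw();
    draw.background().color(BLACK);
    draw.ellipse()
        .x_y(model.phase.sin() * 200.0, 0.0)
        .radius(50.0)
        .color(WHITE);
    draw.to_frame(app, &frame).unwrap();
}
```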

Zsolt Török

Zsolt is a software engineer and musician, with a keen interest in the areas where these two domains overlap. In the past he has worked on the royalties and reporting systems at SoundCloud, and led development teams at Native Instruments. He is currently focusing on using the Rust programming language for creative coding projects, and exploring the possibilities of generative music.
Sponsored Talk
12:20 - 12:50
TBD
In-person
Optimizing Pulsar Train Synthesis for Live Performance
14:00 - 14:50
Ryan (Artie) Devens
Online

This presentation demonstrates the innovations necessary for an ideal use of pulsar synthesis in live performance. The study is embodied in **Pulsar**, a VST3/AU/Stand-Alone pulsar synthesizer tailored for live performance and user control.

Pulsar synthesis has been implemented in many Electro-Acoustic works, including some by Karlheinz Stockhausen, Iannis Xenakis, Barry Truax, and Curtis Roads. Each composer accomplished pulsar synthesis with varied methods of analog and digital synthesis, but none of these methods have been optimized for live performance.

**Pulsar** balances a stripped-down interface with the parameter trajectories essential to pulsar synthesis. **Pulsar** is available as VST3/AU/Stand-Alone and is readily integrated into Digital Audio Workstations.

Optimizing pulsar synthesis for live performance will not replace existing approaches, but will instead push the aesthetic of microsound into a new dimension.
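
For context, a pulsar consists of a brief windowed waveform (a "pulsaret") followed by silence, repeating at a fundamental rate; the ratio of pulsaret length to period shapes the spectrum. The toy generator below illustrates that structure only; it is not Recluse-Audio's code, and it assumes the formant frequency is at least the fundamental.

```rust
use std::f32::consts::TAU;

/// Generate one second of a pulsar train: each fundamental period begins
/// with a short windowed pulsaret and is padded with silence.
/// `fundamental_hz` sets the repetition rate, `formant_hz` the pulsaret's
/// pitch; their ratio is the duty cycle that shapes the spectrum.
fn pulsar_train(sample_rate: f32, fundamental_hz: f32, formant_hz: f32) -> Vec<f32> {
    let period = (sample_rate / fundamental_hz) as usize;
    let pulsaret_len = (sample_rate / formant_hz) as usize; // one cycle
    let mut out = Vec::with_capacity(sample_rate as usize);
    while out.len() < sample_rate as usize {
        for n in 0..period {
            let sample = if n < pulsaret_len {
                let phase = n as f32 / pulsaret_len as f32;
                // Hann window applied to a single sine cycle.
                let window = 0.5 - 0.5 * (TAU * phase).cos();
                window * (TAU * phase).sin()
            } else {
                0.0 // silent tail of the pulsar period
            };
            out.push(sample);
        }
    }
    out
}

fn main() {
    // A 100 Hz pulsar rate with an 800 Hz formant: a classic buzzy tone.
    let train = pulsar_train(44_100.0, 100.0, 800.0);
    println!("generated {} samples", train.len());
}
```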

Ryan (Artie) Devens

Ryan (Artie) Devens is a software developer and the owner of Recluse-Audio. Hailing from the frozen north of Minnesota, Ryan has always been an obsessive recluse. This obsession has culminated in his software and in his compositional output. Ryan’s software prioritizes touchable interfaces with unobscured parameter states, emphasizing the living personality of digital instruments. His music ranges from the Electro-Acoustic to what might be considered Doom Pop, and always incorporates technology. Ryan pursued postgraduate studies at Georgia Southern University towards an M.M. in Music Technology, where he was humbled by the foundations of interactive media, postmodernist aesthetics and, of course, audio programming. His advisor, Dr. John Thompson, was a pupil of the legendary Curtis Roads, and the studies of Roads and UC Santa Barbara have given Ryan a deep interest in microsound, granular, and pulsar synthesis. Ryan is dedicated to environmentalism, collectivism, and kindness towards others. He recognizes that these goals are never fully accomplished and demand a lifetime of effort to maintain.
REPL REPL - a New Interface for Algorithmic Music
15:00 - 15:50
Thorsten Sideboard
Online

Soundb0ard is a REPL and live coding language for making algorithmic music.
It contains several synthesizers and effects, plus a live coding environment to manipulate these sound generators.

In this talk Thorsten will cover the evolution of the tool and its architecture, demonstrating its synthesis capabilities and the extent of the custom programming language, which can be used to manipulate and control the flow and sound.
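
To illustrate the pattern the name refers to, here is a toy read-eval-print loop in Rust; the commands (`bpm`, `play`) are invented for this sketch and are not Soundb0ard's actual language:

```rust
use std::io::{self, BufRead, Write};

// A toy REPL in the spirit of a live coding environment: each line is
// parsed as a command and dispatched to a hypothetical sound engine.
fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let mut bpm: f32 = 120.0;
    print!("sb> ");
    io::stdout().flush()?;
    for line in stdin.lock().lines() {
        let line = line?;
        let mut parts = line.split_whitespace();
        match parts.next() {
            // "bpm 140" updates engine state held by the REPL.
            Some("bpm") => {
                if let Some(v) = parts.next().and_then(|s| s.parse::<f32>().ok()) {
                    bpm = v;
                    println!("tempo set to {bpm} bpm");
                }
            }
            // "play <pattern>" would hand a pattern to the sound generators.
            Some("play") => {
                let pattern: Vec<&str> = parts.collect();
                println!("triggering pattern: {}", pattern.join(" "));
            }
            Some("quit") => break,
            Some(cmd) => println!("unknown command: {cmd}"),
            None => {}
        }
        print!("sb> ");
        io::stdout().flush()?;
    }
    Ok(())
}
```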

Thorsten Sideboard

Thorsten Sideb0ard is a Scottish/American programmer working within computational art. While living in London during the 2000s he worked at Last.fm, regularly hosted music events, and ran the record/net labels Highpoint Lowlife and 8bitrecs.com. Now living in San Francisco, he hosts a monthly radio show and runs the annual Algorithmic Art Assembly conference and music festival, working with Gray Area Foundation for the Arts. He is involved with the live coding and algorave communities, works on his own live coding audio REPL “Soundb0ard”, and has an album forthcoming on Broken20. More at highpointlowlife.com.
Sponsored Talk
16:20 - 16:50
TBD
In-person
