
Reactive Audio

Brian Dunn
React has revolutionized the way client-side developers think about their applications. It makes heavy use of the principles of functional reactive programming. How can these design concepts aid in the development of live music applications? With React, events affect state, and the library implements side effects that keep the presentation synchronized. This allows for a simplicity and determinism that restores joy to the process of building complex interactions. We will look at how this powerful idea can be applied to real-time music synthesis, rendering music as a side effect of state change. Timing is everything when rendering music. We will look at several approaches to achieving precise timing in a reactive programming model. These include push-based rendering, where changes to state precipitate re-rendering, and pull-based rendering, where state is rendered continuously and changes to state are reflected on the next render. The pros and cons of these approaches will be examined. In terms of the Web Audio API, all of these changes ultimately find their way into automating the value of an AudioParam. We will discover some potential improvements to this class that could simplify and clarify implementations in this problem domain. This will be a practical approach. There will be examples in ClojureScript and (hopefully) Elm. Live code execution and glorious rhythmic beeping will keep things fascinating.
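
To make the two timing models concrete, here is a minimal sketch of each in TypeScript (the talk's own examples are in ClojureScript; the names state, setPitch, LOOKAHEAD_S, TICK_MS, and the quarter-second note grid are invented for this illustration). Both paths end, as the abstract says, in automating an AudioParam:

// Hypothetical illustration of push- vs pull-based audio rendering.
const ctx = new AudioContext(); // browsers require a user gesture before audio starts
const osc = ctx.createOscillator();
osc.connect(ctx.destination);
osc.start();

// Shared application state (invented for the sketch).
const state = { pitchHz: 440 };

// Push-based: an event updates state, and the handler re-renders
// immediately by automating the AudioParam at the current time.
function setPitch(hz: number): void {
  state.pitchHz = hz;
  osc.frequency.setValueAtTime(hz, ctx.currentTime); // music as a side effect of state change
}

// Pull-based: a scheduler wakes periodically, reads whatever state
// holds now, and automates the param slightly ahead of the audio
// clock, so timing stays sample-accurate despite event-loop jitter.
const LOOKAHEAD_S = 0.1; // how far ahead of currentTime to schedule
const TICK_MS = 25;      // how often the scheduler polls state
let nextNoteTime = ctx.currentTime;

setInterval(() => {
  while (nextNoteTime < ctx.currentTime + LOOKAHEAD_S) {
    // A change to state is picked up on the next render pass.
    osc.frequency.setValueAtTime(state.pitchHz, nextNoteTime);
    nextNoteTime += 0.25; // quarter-second grid, arbitrary here
  }
}, TICK_MS);

The trade-off the sketch exposes: the push path reacts with minimal latency but inherits the jitter of the JavaScript event loop, while the pull path gains precise timing by scheduling ahead, at the cost of state changes taking effect only on the next tick.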
            
@inproceedings{2016_EA_89,
  address = {Atlanta, GA, USA},
  author = {Dunn, Brian},
  booktitle = {Proceedings of the International Web Audio Conference},
  editor = {Freeman, Jason and Lerch, Alexander and Paradis, Matthew},
  month = {April},
  publisher = {Georgia Tech},
  series = {WAC '16},
  title = {Reactive Audio},
  year = {2016},
  ISSN = {2663-5844}
}