Interview with Phil Burk
Phil Burk is the designer of many of the digital audio tools you might be using right now. We present an in-depth interview with him here.
by Warren Burt, March 2016
Phil Burk is one of the driving forces behind digital audio and advanced music applications these days. After many years of other projects, he’s now working for Google, developing audio and MIDI standards for the Android platform. As he points out, this has the potential to (literally) be a game changer in the world of portable computer music.
SoundBytes: First of all, a brief bio for our readers. What’s your background in technology, in music?
Phil Burk: I started building simple radio circuits in 6th grade. My first programming experience was on our high school’s one computer. It was basically a programmable calculator. I was a bit of a nerd so I hung out in the library calculating rotational energy quanta for small molecules for reasons that now escape me.
After UC Berkeley I wanted a synthesizer so I could make weird sounds. But I did not have any money, so I started building synth modules out of op-amps and surplus parts. Around 1979 I built one synth in a shoe box with spring connectors, using LM3900 op-amps: two VCOs, a sample-and-hold, a mixer, an amp, and a pulse generator, for about $5 in parts.
I then built a Z80 kit and started doing machine language programming. I experimented with lots of circuits, variable rate digital oscillators, phase locked loops, guitar synths – great fun.
Things got serious when I started working with the folks at the Mills College Center for Contemporary Music. I met Larry Polansky and David Rosenboom. They had a grant to develop an experimental music language called HMSL. This was a toolbox for manipulating multi-dimensional shapes and organizing intelligent objects into a musical hierarchy. I wrote a Forth compiler in assembly and then we built HMSL on top of that. It ran on Mac and Amiga. We added support for the Motorola DSP 56000 so we could do real-time synthesis. This helped greatly with my long term goal of making even more weird sounds.
In 1992 I left academia and started working in Silicon Valley at a startup called 3DO. We developed the first software synthesis based audio system on a game console.
After 3DO, I worked on various projects including JSyn, a Java synthesis API, and a MIDI ringtone synthesizer. I also did contract work for the Sony PS3.
SB: Where did you grow up?
PB: I grew up mostly in the San Francisco Bay Area – Fremont and Berkeley. I spent one year in London when I was 12.
SB: Who are some of the people you’ve worked with?
PB: As far as playing music, my closest collaborator is Todd Telford. He and I have been improvising together on guitar and electronics for 40 years. Egad, that’s a long time.
Then I worked very closely with Larry Polansky, an amazing composer, theoretician, and programmer who introduced me to the world of computer music. While I was at Mills College, I worked with Tom Erbe who wrote SoundHack, and folks from the pioneering Hub network band, Scot Gresham-Lancaster, Chris Brown, and John Bischoff who were all on faculty. I also met Robert Marsanyi, a very creative composer and programmer. I worked with Robert on several projects over the years. I also wrote some pieces and performed with Jeanne Parson. Jeanne later introduced me to the 3DO folks.
Through HMSL I met Nick Didkovsky. He came out to California and slept on my floor so we could hack a Lisp compiler in Forth and write some music pieces. Nick has an amazing band called Doctor Nerve and used HMSL to compose pieces for them to play. Nick and I also worked on JSyn and JMSL together.
I wrote the software for a couple pieces for Phil Corner, a member of the New York Fluxus movement. [Ed: An art movement which started in New York in the 60s which emphasized performance, installation, instructional notation, the use of everyday activities in art work, etc. It’s still alive and well.]
Don Buchla had close ties to Mills College and I ended up writing some software for him.
When I went to 3DO I worked with RJ Mical, Dale Luck and other folks who were on the Amiga OS team. I learned a lot from them about OS design and software architecture.
I met Ross Bencina online.[Ed: Ross is the developer of AudioMulch, interactive music software for Mac and PC.] He and I were both working on a host independent audio API. So we joined forces and created PortAudio.
After 3DO I met Max Neuhaus, who was also part of the New York Fluxus scene. Max was a percussionist and sound artist who was also the first person to connect the phone system to the radio. He did it to create a collaborative sound piece but ended up inventing call-in radio. Max recruited me to work with him on an online voice-driven sound piece called Auracle. We then worked together on several sound installations. He designed the sound and provided the artistic concept; I wrote the software and built the hardware. These systems are installed at the Dia Beacon Art Gallery in New York, the Menil Museum in Houston, a square in Stommeln, Germany, and a site in Kassel, Germany.
SB: What brought you to Google (besides a number 40 or 120 bus (grin))?
PB: I had worked at home for 15 years and needed to get back out in the world. I was using a lot of Google products, including Search, Android, App Engine, Docs, etc., and I really liked what they were doing. So when Google called to recruit me, I took the bait. I dreaded the 4-hour bus commute each day, but it turned out to be a good decision. And the bus has wifi, which I am using now to write this email.
Google is a pretty mind-boggling place to work. They have lots of big ideas and the resources to pursue them. And it’s fun working on products that get used by so many people.
SB: Describe the Android Audio project for us.
PB: We are working on a project called “Android Pro Audio”. Basically we are trying to improve Android as a platform for developing music performance apps.
SB: Why is this necessary?
PB: Some developers were reluctant to develop music performance apps for Android. They wanted three things: low latency, MIDI support and USB audio support. So we decided to focus on those areas.
The most important issue was audio latency. Latency is the time from a user input, like pressing a key on a keyboard, to the resulting sound output. This needs to be very low for an instrument to feel responsive; if you can notice the latency, then it is way too high. Android latency used to be in the range of 100-300 milliseconds. That is fine for playing videos or MP3s, but far higher than you would want for a music performance app.
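To make the numbers concrete, the audio output buffer alone contributes `bufferFrames / sampleRate` seconds of latency, and older audio stacks chained several large buffers together. This is a back-of-the-envelope illustration of that arithmetic, not Android API code:

```java
// Rough illustration (not Android API code): the output buffer's
// contribution to latency is bufferFrames / sampleRate seconds.
public class LatencyMath {
    // Returns the buffer's contribution to output latency in milliseconds.
    static double bufferLatencyMs(int bufferFrames, int sampleRateHz) {
        return 1000.0 * bufferFrames / sampleRateHz;
    }

    public static void main(String[] args) {
        // A 256-frame buffer at 48 kHz adds about 5.3 ms; a 4096-frame
        // buffer at 44.1 kHz adds about 93 ms, so a chain of a few such
        // buffers easily reaches the 100-300 ms range mentioned above.
        System.out.println(bufferLatencyMs(256, 48000));
        System.out.println(bufferLatencyMs(4096, 44100));
    }
}
```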
Glenn Kasten has been leading the effort to lower that latency. This is a system wide project that involves working with manufacturers who write the device drivers, and working internally on reducing buffer sizes and managing CPU performance. We can now achieve latencies under 20 milliseconds on some devices. The goal now is to get those better numbers out to as many devices as possible, and to drive the numbers even lower.
SB: What platforms will it work on?
PB: The latency has been getting lower and lower with each OS release. USB audio was added to the Lollipop release by Paul Mclean. Mike Lockwood and I added MIDI support to Marshmallow.
Late-breaking news: The N preview release of the Android OS has some new methods that allow apps to cut their Java audio latency by 50-100 milliseconds. Look for AudioAttributes.FLAG_LOW_LATENCY, AudioTrack.setBufferSizeInFrames() and AudioTrack.getUnderrunCount(). Details will be added to the Android docs. This information was not public when we did the interview, but it is now OK to talk about it.
SB: How will it incorporate MIDI?
PB: We support USB-MIDI so you can plug a MIDI keyboard into an Android phone or tablet and play it as a synthesizer. We also support the new Bluetooth LE MIDI standard for wireless connections. Expect to see more BLE-MIDI controllers coming out over the next year. And we support MIDI message passing between applications. So developers can write a synthesizer that receives MIDI messages from a composing app written by someone else.
SB: Am I correct in assuming that this will apply to all Android devices, and that one will be able to use MIDI like this on both an Android phone as well as a tablet?
PB: Android MIDI should work on most Android devices that are running Marshmallow (M) or later versions of Android. This includes phones, tablets, and even Android TVs. Android MIDI would normally be used to plug a MIDI controller into a phone. But Android devices can also be used as a multi-touch MIDI controller and plugged into a laptop. The manufacturer needs to enable some USB features in order for Android to work as a controller. But we expect most manufacturers will do that.
M is now on Nexus devices and will be available on many other phones in the near future. Older phones may not be able to run M; there are too many phones to say which ones will or will not run it.
SB: Can we expect an Android app similar to Audiobus where we can interconnect various sound making and processing apps, along with MIDI information flowing between them?
PB: Android MIDI allows you to send MIDI data between apps. But we don’t currently have any way to send audio data between apps. Lots of people have asked for that. So we are trying to figure out the best way to do that securely and with low latency.
SB: And will there eventually be apps like MusicIO and StudioMux, both of which allow audio and MIDI flow along USB between Android devices and laptops, or even (heavens!) between Android and iOS devices?
PB: On Marshmallow, you can send MIDI between Android devices or between Android and laptops. Sending audio between devices using USB is possible but not well supported on all devices.
SB: If I have an old Android 2 tablet, is there any way I can upgrade it to have Android Audio (and MIDI) working on it?
PB: This is up to the manufacturer. It is hard to backport new operating systems onto old hardware, so you typically only get OS upgrades for 2-3 years after a product is released.
It will take a while for these changes to become common on Android devices. But the sooner we start the sooner that will happen. I would love to someday see a billion affordable Android devices capable of running killer music apps.
SB: About your other projects – tell us a little bit about JSyn. Is it still available, and in what form?
PB: JSyn is a synthesizer toolkit for Java. It is distributed as a JAR file.
If you are writing a Java program, then you can use JSyn to create modules like oscillators, filters, and envelopes. You can connect them together to make big complex patches and control them very precisely. JSyn has over a hundred different kinds of modules, or “unit generators”, including granular synthesizers, wave-shapers, various ramps, sample players, several noise sources, Moog style filters, etc.
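The unit-generator idea behind toolkits like this can be sketched in a few lines of plain Java. This is a toy illustration of the concept, not JSyn’s actual API: each unit fills a buffer of samples, and units are patched together by feeding one unit’s output into another’s input.

```java
// Toy sketch of the unit-generator concept (NOT JSyn's actual API):
// each unit produces a buffer of samples, and units are chained by
// wiring one unit's output into the next one's input.
public class UnitGenDemo {
    interface UnitGenerator {
        double[] generate(int frames);
    }

    // A naive sine oscillator unit.
    static class SineOsc implements UnitGenerator {
        final double frequency, sampleRate;
        double phase = 0.0;
        SineOsc(double frequency, double sampleRate) {
            this.frequency = frequency;
            this.sampleRate = sampleRate;
        }
        public double[] generate(int frames) {
            double[] out = new double[frames];
            double inc = 2.0 * Math.PI * frequency / sampleRate;
            for (int i = 0; i < frames; i++) {
                out[i] = Math.sin(phase);
                phase += inc;
            }
            return out;
        }
    }

    // A gain stage acting as a "patch cord": scales another unit's output.
    static class Gain implements UnitGenerator {
        final UnitGenerator input;
        final double amount;
        Gain(UnitGenerator input, double amount) {
            this.input = input;
            this.amount = amount;
        }
        public double[] generate(int frames) {
            double[] buf = input.generate(frames);
            for (int i = 0; i < frames; i++) buf[i] *= amount;
            return buf;
        }
    }

    public static void main(String[] args) {
        // A two-unit patch: sine oscillator -> gain.
        UnitGenerator patch = new Gain(new SineOsc(440.0, 44100.0), 0.5);
        double[] block = patch.generate(64);
        System.out.println("first sample = " + block[0]); // 0.0 (phase starts at 0)
    }
}
```

Real JSyn patches work at a much higher level (named ports, a scheduler, precise timing), but the buffer-chaining structure above is the core idea.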
JSyn has been in continuous development since 1997. When it started, the synthesis engine was in ‘C’ with a thin Java layer on top. But it was a nightmare to maintain the native ‘C’ code on Linux, Mac and Windows and for multiple browser plugins. So in 2010 I updated the API and converted JSyn to pure Java. Now it is much more portable and easy to maintain. It runs about 80% as fast as the ‘C’ code. But I am about 10 times more productive now that JSyn is in pure Java.
I released the source code on GitHub. JSyn can be used freely under the Apache Open Source license. The JSyn source code is available at: https://github.com/philburk/jsyn. More information about JSyn, including documentation, is at: http://www.softsynth.com/jsyn/. Pre-compiled JSyn JAR files are at http://www.softsynth.com/jsyn/developers/download.php.
SB: And along those lines, you were instrumental in programming the HMSL algorithmic music language, one of the first, and most unique. Is there any chance we can see HMSL revived in some form on the Android platform?
PB: We are reviving HMSL to run on modern Mac and Windows. People will be able to experience 1980’s style experimental music using the original software. But HMSL is very text-oriented. It is an interactive programming environment that is good for live coding, so I don’t think it would adapt well to phones and tablets. There are no plans to port it to Android.
Nick Didkovsky is working on JMSL, which is a Java toolkit based on the ideas of HMSL. It supports multi-dimensional abstract music shapes, hierarchical composing objects, and algorithmic composition tools. Maybe we can talk Nick into porting it to Android.
SB: Tell us a bit about your own music – I’m specifically curious about how you made Glass Hand Duet #2, and SubDiv C30 T10, both of which are on your Sound Cloud site: https://soundcloud.com/phil-burk
PB: Glass Hand Duet #2 is a live recording made by Nick Didkovsky and me back in 1999. Nick was in New York and I was in California. We were both running a JSyn Applet in a web browser. The Applets were communicating through a TransJam server that I wrote. The server is kind of like a multiplayer game server, but very general purpose. John Bischoff came up with the idea for the piece and created the raw sounds. I wrote the JSyn Applet. The sounds are generated by “scratching” over a sample at a high rate using an oscillator.
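The “scratching” technique Burk describes might look roughly like this. This is a guess at the idea, not the original applet code: a sine oscillator sweeps the read position back and forth across a stored sample, re-reading it at audio rate.

```java
// Rough sketch of oscillator-driven "scratching" (not the original
// JSyn applet code): a sine LFO sweeps the read head back and forth
// across a stored sample, re-reading it at audio rate.
public class Scratcher {
    // Scans `sample` with a sine positioned at its center +/- half its
    // length, oscillating at scratchHz. Returns `frames` output samples.
    static double[] scratch(double[] sample, double scratchHz,
                            double sampleRate, int frames) {
        double[] out = new double[frames];
        double center = (sample.length - 1) / 2.0;
        double depth = center; // sweep across the whole sample
        for (int i = 0; i < frames; i++) {
            double lfo = Math.sin(2.0 * Math.PI * scratchHz * i / sampleRate);
            double pos = center + depth * lfo;
            // Linear interpolation between neighboring sample frames.
            int i0 = (int) pos;
            int i1 = Math.min(i0 + 1, sample.length - 1);
            double frac = pos - i0;
            out[i] = sample[i0] * (1.0 - frac) + sample[i1] * frac;
        }
        return out;
    }

    public static void main(String[] args) {
        // Scratch over a simple rising ramp as the source sample.
        double[] ramp = new double[1000];
        for (int i = 0; i < ramp.length; i++) ramp[i] = i / 999.0;
        double[] out = scratch(ramp, 30.0, 44100.0, 512);
        System.out.println(out[0]); // begins at the sample's midpoint
    }
}
```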
SubDiv C30 T10 was composed by Phil Corner in 1987. It involves subdividing a measure into N equal-duration notes, where N can be an integer between 1 and 20. The note pitch is higher when N is higher. Several values of N are chosen and played together on a Yamaha FB-01 using an HMSL program that I wrote. This creates interesting symmetric polyrhythms. The C30 refers to cassette #30; I have about 60 old cassette tapes that I am digitizing.
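The subdivision scheme described above is simple to sketch. This is an illustration of the idea as stated, not Burk’s original HMSL code: each voice splits the measure into N equal notes, and playing several N values together yields the cross-rhythms.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the subdivision idea as described in the interview (not
// the original HMSL program): a measure is split into N equal notes,
// and several values of N sounding together produce polyrhythms.
public class SubDiv {
    // Onset times (as fractions of one measure) for N equal subdivisions.
    static List<Double> onsets(int n) {
        List<Double> times = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            times.add((double) i / n);
        }
        return times;
    }

    public static void main(String[] args) {
        // N = 3 against N = 4: the onsets coincide only at the downbeat,
        // giving the classic 3:4 cross-rhythm.
        System.out.println(onsets(3)); // thirds of the measure
        System.out.println(onsets(4)); // [0.0, 0.25, 0.5, 0.75]
    }
}
```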
SB: Well, we’ll look forward to hearing more of them. Thanks very much, Phil Burk.
PB: You’re welcome. A pleasure.