Music for Tablets – Interview with Michael Gogins

 

by Warren Burt, Mar. 2017

 

Michael Gogins is a composer and computer programmer from New York City who has been around the computer-music community for several decades.  He maintains the program Csound on both the Windows and Android platforms.  Csound is a computer language which allows you to synthesize just about any kind of waveform, with any kind of structure imaginable.  It is a direct descendant of the very first computer languages for music, developed by Max Mathews at Bell Labs in the late 1950s.  In 1985, Barry Vercoe, at MIT, developed Csound on the basis of his earlier Music 11 language, then the state of the art in computer synthesis. Since then, a large international community has developed hundreds of opcodes (signal-processing functions) for Csound, an open-source project.  Michael is one of the gurus of Csound, and is the person responsible for it on the Windows and Android (Android 5.0 and later) platforms.  He was recently in Melbourne, Australia, and we caught up with him (at the Copperwood Restaurant in Carlton) and the following interview resulted.

SoundBytes/Warren Burt: Michael Gogins, you’re a computer music composer and a computer programmer as well, who is based in New York City, and you maintain the Csound for Android program/platform.  Can you tell us a little bit about your early life – your musical inspirations, etc?  Where did you study computer music, and when, and with whom?

Michael Gogins: Actually, I maintain the Windows release of Csound as well as the Csound for Android app, and have contributed a number of opcodes and other facilities to Csound over the years. I have also written an algorithmic composition system, which I use to write my own music; it has changed form several times over the years. Originally it was named Silence and was written in Java; then it became CsoundAC, in C++ with Lua, Java, and Python interfaces; and now it is written in JavaScript and is named Silencio.

My musical aspirations arose from my family. My father was an inventor and my mother was an artist. They took me to Utah Symphony Orchestra concerts when I was a child. I was also influenced by friends of my family, including Eugene Foster, who was principal flutist in the Utah Symphony Orchestra and for a time rented our basement as a teaching studio. When I heard Gene warming up with his gold flute, it was so gorgeous, I said, “I want to do that.” Now I wish he had been a pianist, but you can’t have everything.

I played flute on and off for years. I was a music major at the University of Utah but I dropped out for various reasons. I lived in Los Angeles for about five years, and played street music and occasional gigs there. Then I moved back to Salt Lake. Starting in 1978, my girlfriend was a composer, Esther Sugai, and that had something to do with it as well. She was a composition student of Vladimir Ussachevsky at the University of Utah. He was a visiting professor there on several occasions. After Esther got her master's degree, we moved to New York, where she started a composers' collective and worked in the Columbia-Princeton Electronic Music Studio. All of this interested me very much, and I learned a lot about composing by just listening to Vladimir and Esther talk. We moved to Seattle after a year in New York, where I went back to college at the age of 33 at the University of Washington, and obtained a BA in Comparative Religion, with a focus on Eastern religions.  While I was at the University of Washington, John Rahn, editor of Perspectives of New Music and author of Basic Atonal Theory, put up a flyer in the halls advertising a seminar in computer music. I took the seminar twice. There was not even a functional digital to analog converter at the University until the very end of my second seminar, when I finally heard 40 seconds of music created by a program I wrote, which translated each time step of a one-dimensional rule 110 cellular automaton into a stack of sine waves. This program was written in Fortran and submitted as a batch [old school word suggesting “submitted to be executed via a deck of punched cards” – Ed].
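[That Fortran program is long gone; as an illustration only, here is a minimal sketch of the same idea in modern Csound, not Gogins' code: each row of a rule 110 cellular automaton is played as a stack of sine waves, one partial per live cell. The cell count, step length, and frequency mapping are arbitrary choices. – Ed]

<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1 ; iterate rule 110 at init time, scheduling one row of partials per step
  iN = 16                              ; number of cells
  iState[] init iN
  iNext[]  init iN
  iState[iN - 1] = 1                   ; seed: one live cell at the right edge
  iT = 0
  while iT < 32 do
    iK = 0
    while iK < iN do                   ; live cell k sounds as partial (k + 1)
      if iState[iK] == 1 then
        schedule 2, iT * 0.25, 0.3, 110 * (iK + 1)
      endif
      iK += 1
    od
    iK = 0                             ; compute the next row
    while iK < iN do
      iL = iState[(iK + iN - 1) % iN]  ; left neighbor, wrapping at the edges
      iC = iState[iK]
      iR = iState[(iK + 1) % iN]
      iHood = iL * 4 + iC * 2 + iR
      iNext[iK] = floor(110 / (2 ^ iHood)) % 2  ; bit iHood of 110 is the new cell
      iK += 1
    od
    iK = 0
    while iK < iN do
      iState[iK] = iNext[iK]
      iK += 1
    od
    iT += 1
  od
endin

instr 2 ; one sine partial
  kenv linen 0.05, 0.02, p3, 0.1
  asig oscili kenv, p4
  outs asig, asig
endin
</CsInstruments>
<CsScore>
f 0 9   ; keep the performance alive for the scheduled notes
i 1 0 1
</CsScore>
</CsoundSynthesizer>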

The seminars, which were really great, were my only formal study of computer music. Esther and I got married, but a few years later we got divorced. I moved to New York intending to study theology, but I didn't. Instead, I heard about Brad Garton and his computer music studio at Columbia, the same place Esther had worked. Brad welcomed non-students into the "woof users group," and I was able to use the Sun and NeXT workstations at Columbia to make more computer music. I had pieces performed about twice a year for a few years in the "woof concerts," often in the old Merce Cunningham Dance Studio on top of Westbeth in the Chelsea neighborhood of Manhattan. The woof meetings were certainly an intensive informal education in computer music, because I was exposed to the work of graduate and postgraduate computer musicians at Columbia, and to the work of Brad himself, his teacher Paul Lansky, and other important computer musicians.

SB: When did you start making music with computers?  What kinds of computers were they, and how did you work with them?

MG: I got started making music with computers when I saw Martin Gardner's Mathematical Games column in Scientific American, "White, Brown, and Fractal Music," in 1978. It featured a picture of a Peano snowflake curve. The instant I saw this picture, I thought, you could punch the horizontal lines in this picture into a piano roll and have some sort of interesting music. I also thought you would best do this kind of thing on a computer, and that if I was ever going to be a composer, this would be the best way for me to do it. I didn't actually have a computer or access to a computer at that time, but that's really how I got started. The first computer I had access to was an Apple IIc computer at a public electronic music studio in Seattle in 1983. This was when I was taking my first seminar in computer music with John Rahn. I was able to program the computer in Apple Basic to compute a cellular automaton and to play each state of the automaton as a big MIDI arpeggio. Then there was the CDC mainframe that we used for the seminar. After that, when I moved to New York by myself, one of the first things I did was go out and buy an Atari ST computer because it had a built-in MIDI interface and a BASIC compiler. I started writing compositional algorithms in BASIC to drive a Casio CZ-101 synthesizer. Then I bought a Toshiba PC-compatible laptop and programmed it in QuickBasic, Turbo Pascal, and Borland C and C++. I was teaching myself all of these languages. My approach was always to take some irreducible generator of form and map it onto a piano roll type score. This is still the basic approach I follow today.

SB: You make music by writing programs and having the computer synthesize the sound, and sometimes vision, based on your program.  This is different from using commercial music software and hardware. What, for you, are the advantages of doing programmed computer music?

MG: The advantages follow from the fact that I do art music, not commercial music. By the way, I do love and listen to a lot of pop music; this is not meant as a putdown. But my basic esthetic is somewhere in the line from Bach through Beethoven to Morton Feldman, Conlon Nancarrow, or Iannis Xenakis. Because I do art music, it is critically important that I not be limited by imitation. Commercial software has, for me, two problems. The most serious is that it makes it easier to create commercial music and harder to create art music. There is a groove for 12-tone equal temperament, standard tempos and meters, canned sounds, and so on; other things are possible, but harder. The other, almost equally serious problem is that commercial software is closed source. If the manufacturer changes a file format and one of my old pieces breaks and can't be re-edited, there is nothing I can do about it. I have in fact had to go back to pieces 5 or 10 years old and rework them. The software I actually use is Csound, an open source sound processing language, along with software that I write myself. I understand this code, I have written some of it myself and maintained more of it, so I can fix it if I need to, and I have needed to do that several times over the years.

Another big advantage of doing programmed music is that the computer music community is more collaborative than competitive. That means I can borrow and adapt other people's code without having to think I am a thief. I have benefited greatly from using a large number of sophisticated Csound patches from many different composers, notably John ffitch [how Mr. ffitch spells his name; not a typo – Ed], Steven Yi, Iain McCurdy, Hans Mikelson, and others.

SB: Nearly all your work is algorithmic in nature.  Can you tell us what that is, and why you found that attractive?

MG: An algorithm is a definite, step-by-step procedure. Any such procedure is an algorithm. A spaghetti recipe is an algorithm. The method for doing long division is an algorithm.

With some algorithms, such as the algorithm for adding numbers in base 10, you can pretty much do them in your head and tell what the answer will be in advance. With other, quite simple algorithms, the output quickly becomes unpredictable and you simply can't do it in your head. An example is the logistic map: iterating x ← c · x · (1 − x) becomes completely unpredictable as the value of c approaches 4 and the number of iterations increases.  The technical name for this is "computational irreducibility." I think this is what makes algorithmic composition interesting, what makes it work. I can set up a procedure that I am confident can sometimes produce interesting music, but I can't predict the results in detail. This is somewhat related to the approach of John Cage, or even the minimalists.

Therefore, because there are so very many possibilities, the results can't possibly be derivative. Then I tune and tweak the parameters, the details of the algorithms, the sounds used to render the notes, and so on; small changes in these can produce big changes in the music, and can produce big improvements.

So in the end, I feel that computational irreducibility can be used to magnify my musical imagination.
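[As a concrete illustration, here is a minimal Csound sketch, not Gogins' own code, that iterates the logistic map with c near 4 and turns each iterate into the pitch of a short note; the pitch range and durations are arbitrary choices. Nudge c or the initial x and the melody changes completely. – Ed]

<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1 ; iterate the logistic map at init time, one note per iterate
  ic = p4        ; the c parameter of the map
  ix = 0.5       ; initial value of x
  indx = 0
  while indx < 64 do
    ix = ic * ix * (1 - ix)
    ; map x in (0,1) onto a frequency between 200 and 800 Hz
    schedule 2, indx * 0.125, 0.12, 200 + ix * 600
    indx += 1
  od
endin

instr 2 ; a short sine tone
  kenv linen 0.2, 0.01, p3, 0.05
  asig oscili kenv, p4
  outs asig, asig
endin
</CsInstruments>
<CsScore>
f 0 9          ; keep the performance alive for the scheduled notes
i 1 0 1 3.99   ; c near 4: fully chaotic
</CsScore>
</CsoundSynthesizer>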

My music is algorithmic because that is what comes naturally to me. I made up original polyphonic music in my head when I was in middle school, and I can improvise on the flute and play by ear, and I can read music for the flute (not so much the piano, not so much scores).  But I feel I would be quite derivative and mediocre if I tried to get better in these ways, except possibly for improvising. But with algorithms, I feel I can tell which ones will turn out to be musically interesting. Best of all, because I can hear in advance what the algorithms will do only in a rather general sense, as I struggle to refine and tweak an algorithm I can achieve something reasonably well-formed that is still not imitative or derivative. Another way of saying it is that my music is algorithmic because that is the kind of music I want to hear.

SB: Going back to what you said about irreducibility and imitation in art music, can you expand a bit on that?  Most people (huge assumption here!) use computers, I think, to make music that sounds like something they already know and like; and of course, composers like Stravinsky, or various post-modernists, would be involved in music that, while not quite imitation, definitely refers to earlier models.  But you're searching for something different from that, and for you, that difference seems to be the essence of what makes art music art music.  Can you comment on that?

MG: Actually, I am trying to do both. I want my music to be easy to hear and understand, and therefore I need it to be based on traditional notions of chord progression and voice leading. In other words, I feel that the basic principles of voice leading, already worked out completely in the Baroque period, are simply an algorithm for ensuring that multiple voices of music can easily be heard, and I think that these principles have permanent validity for any polyphonic, note-based music.

At the same time, just as Schoenberg felt tonality was exhausted and wanted to create a whole new formal principle for organizing music, I want to use computational irreducibility and other aspects of algorithmic composition to break the hold of the past on my imagination. There are a lot more ways of stringing voice-leading operations together than canon or developing variation.  I'm not sure I have succeeded, but I am sure I am on a good path, a path that needs to be followed. And I am certain I can push this approach much further than I have. I got a late start with this stuff and I am trying to push it as far as I can while I have time.

The image above is a picture of triads in chord space under octave and permutational equivalence, which is what musicians simply call "triads." This kind of thinking is based on the work of Dmitri Tymoczko.

SB: For readers unfamiliar with Csound, can you give a very quick introduction to it?

MG: Csound is a Unix command for writing soundfiles or audio streams based on a programming language. Actually there is one language for specifying scores, and another language for specifying orchestras. Csound is an old program, first written in 1985, that has been vastly expanded and ported to almost all operating systems, and to Web browsers. Csound is occasionally taught in music technology courses and composition schools, although it is more common to teach Max/MSP now because there is a bigger demand for Max/MSP in music production.

However, though I am perhaps biased, to the best of my knowledge Csound is the most powerful of the family of programs that includes Csound, RTcmix, SuperCollider, Pure Data, Max, ChucK, and others. Csound is completely open source, it runs on all platforms, you can write plugin opcodes for it, and it has a superior set of time/frequency analysis/resynthesis tools called the Phase Vocoder Streaming opcodes, by Victor Lazzarini. Another advantage of Csound is the huge collection of pieces and instrument designs, going back 30 years and still coming out, that are available for re-using and adapting. There's a real collaborative, community-based scene around Csound.

The image above is a picture of the sonogram for Gogins' piece Sound Fractals, using a linear display of time on the X axis and a logarithmic display of frequency on the Y axis. The fractals were generated in code and directly translated into sound using granular synthesis.
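[To make the two languages concrete, here is a minimal complete Csound piece, a sketch for this article rather than anything of Gogins': the orchestra defines a single sine-wave instrument, and the score tells it what to play and when. – Ed]

<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
; the orchestra language: how sound is made
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
  kenv linen 0.2, 0.05, p3, 0.1   ; simple amplitude envelope
  asig oscili kenv, p4            ; sine tone at the frequency in score field p4
  outs asig, asig
endin
</CsInstruments>
<CsScore>
; the score language: instrument, start, duration, frequency
i 1 0 2 440
i 1 2 2 660
e
</CsScore>
</CsoundSynthesizer>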

Although it is easier to get started with Max, which is a visual patching language, most composers find that learning to write Csound code pays off if you want to do a big project or re-use code in different projects. Here is an example of something that sounds a bit like a MiniMoog synthesizer:

 
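[The original code example did not survive the move to this format. In its place is a minimal sketch in the same spirit, not Gogins' original: two slightly detuned sawtooth oscillators run through the moogladder filter opcode, with an ADSR envelope sweeping both amplitude and filter cutoff. – Ed]

<CsoundSynthesizer>
<CsOptions>
-odac
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1 ; two detuned saws into a Moog-style ladder filter
  kenv madsr 0.02, 0.1, 0.7, 0.3     ; attack, decay, sustain, release
  asaw1 vco2 0.3, p4                 ; band-limited sawtooth
  asaw2 vco2 0.3, p4 * 1.005         ; a second saw, detuned slightly sharp
  kcut = 800 + kenv * 4000           ; the envelope also sweeps the cutoff
  afilt moogladder asaw1 + asaw2, kcut, 0.6
  outs afilt * kenv, afilt * kenv
endin
</CsInstruments>
<CsScore>
i 1 0 2 110
i 1 2 2 220
e
</CsScore>
</CsoundSynthesizer>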

SB: You were involved in porting Csound to the Android platform, and now maintain that free app (in the Google Play Store).  What are your thoughts about having a program like Csound, formerly only available on mainframe computers (a looong time ago, I know), now available on tablets and even phones?

MG: I think it's really great. I also think most people have no idea how much power is packed into the thing and how long it will take to learn even a fraction of what it can do. I myself have only learned part of what Csound can do. For me, the advantage of Csound for Android is that it makes doing computer music more like carrying around a notebook of staff paper. You can carry it around all the time, you can run it anywhere, and you can work on the same pieces on a workstation or on a tablet or even a phone. You can work on a plane. You can work in bed. When I had my old day job as a computer programmer, I used Dropbox on my notebook and my tablet, and at work, I would go to the New York Public Library at lunch and spend 20 or 40 minutes working on pieces with noise-cancelling headphones in the wonderful main reading room.  Being able to do this significantly increased the time I could spend on the pieces, and the eventual quality of the pieces, several of which have been performed in festivals and are available online.

SB: Would you like to conclude your thoughts on Csound on tablets and phones with something upbeat and optimistic about the future?

MG: I have nothing but optimism regarding the power of the hardware and software, and guarded optimism about our ability to master the stunning potential of the instrument. I do wonder how long it will take to rebuild an intellectual property infrastructure that will protect (a) the freedom of the audience, (b) fair payments to the artists, and (c) opportunities for beginning artists. Right now it just isn't working. But it is better than it was. I also wonder if the crazy fragmentation of languages, operating systems, computing platforms, and so on will ever cohere into a set of standards that everyone can learn together and that will provide a foundation for a future tradition, as the piano keyboard and music notation have done. But the growing importance of Web standards convinces me that the same will happen with music software, so that's a real point of optimism.

To hear some of Michael Gogins’ music, go to:

https://www.youtube.com/user/michaelgogins

The image at right incorporates a fractal generated by Michael Gogins, used both as a score and as the cover for this album.
