Interview with Jerry Gerber – Composer and Master of Sequencing

Jerry Gerber is an accomplished composer and a master of MIDI sequencing. Jerry shares his philosophy of music and composition in this insightful interview.

by David Baer, May 2014

The rich array of high-end orchestral sample libraries available for purchase is pretty clear proof that there’s a sizeable population of composer/arrangers buying them.  These libraries are not inexpensive to produce, so the market supporting them must be substantial.  But how many of these buyers are widely known for their work?  It would seem that most of them toil in the shadows, producing essentially anonymous film scores to feed the immense media production industry of our time.

Not so Jerry Gerber.  He composes music to be heard on its own terms, just like composers of the pre-computer, pre-mass-media era.  But at the same time he embraces all that modern technology has to offer in pursuit of his musical goals, mounting performances of his work via sample libraries and synth sequencing.  In addition to being a fine composer, he is unquestionably a master of MIDI sequencing.  If you listen to any random sampling of his music, you will probably agree that he succeeds admirably – that is, if you don’t mistake his recordings for fully symphonic performances played by live musicians in the first place.

Jerry generously made himself available to talk to Soundbytes Magazine in the interview that follows.

 

SoundBytes:  Jerry, thank you for taking the time to talk with us.  Let’s start with your musical background.  Please tell us about your education and influences.

Jerry Gerber:   I started my musical training on accordion around my 10th year and wrote my first composition around my 11th.  After studying accordion for several years, I switched to guitar and began studying jazz/rock guitar with an excellent teacher, Eddie Arkin, well known in L.A. studio music circles.   By the time I was 14, I was playing in bands and clubs in L.A., and did that until my mid-20s.  I came back to the keyboard at the age of 18, only this time piano.  This is also when I began to study music theory more seriously.  My jazz and classical music theory studies continued through my 20s, studying piano and theory with various musicians and composers in Los Angeles, and I earned my Bachelor of Music in classical music theory and composition from SF State University in 1982.  My musical influences are many, including Western classical and American jazz, Northern Indian classical music, rock ’n’ roll, folk and folk-rock from the 60s, world music and contemporary electronic music.

 My non-musical influences include Erich Fromm, Ralph Waldo Emerson, Hermann Hesse, P.D. Ouspensky, Carl Sagan, the Urantia Book, the Pathwork Lectures and four decades of practicing meditation.  These are some of the ideas, people and writings that have been an inspiring and positive influence on my life and my music.

 SB:  So how did you then gravitate toward computer composition/performance?

JG:  I started playing around with tape recorders around the age of eleven.  Playing electric guitar in bands for a decade or so got me familiar with recording studio technology in the 1960s and 70s.   As soon as I graduated from college, I got my first synthesizer, a Yamaha CS-70, a year before MIDI was introduced.   When MIDI appeared on the scene in 1983, I got my first digital synthesizer, the Yamaha DX-7.  When I heard Gary Leuenberger’s MIDI programming of a Bach Brandenburg Concerto (1st movement of #5 in D Major) around 1984, I was astonished at what computers and digital synthesis were capable of, even in the mid-1980s.  Gary used eight DX-7s in a rack (the TX-816) for his interpretation of Bach; a few years later I too was using the TX-816 rack while scoring the popular Claymation animated TV series The Adventures of Gumby.

Though I spent over twelve years doing mostly commercial work for hire, I continued to write my own compositions.  I began to feel that the electronic music studio (MIDI, synthesizers and sample libraries) was the medium I strongly desired to work in, not just professionally but artistically as well.  So around 1994 I began writing orchestral music for this medium, and by the early 21st century I knew that advancing music and audio technology was giving musicians a powerful new set of tools and instruments that were becoming increasingly more musical.

My first sampler, the Roland S-50, circa 1988, offered about ½ megabyte of memory and 12-bit resolution.  The re-creation of, say, a violin was quite mediocre; it’s just not possible to achieve a high-quality sampled musical instrument using only a few samples at 12-bit resolution.  My current library, the Vienna Symphonic Library Orchestral Cube, has a solo violin sample-set; this instrument alone consists of over 46,000 24-bit samples!  The entire library consists of over 760,000 samples.  This translates into a capability for musical phrase-shaping, nuance, expression, detail and variation of tone: musical values that exist in both acoustic and electronic music.  I also make use of software synthesizers, including Massive, Tera, FM8, Rapture and Z3TA-2.  I am grateful I was born at a time when I could experience digital music technology in its infancy and watch it progress toward a more mature musical medium.

SB:  Let’s focus on some of your music, starting with a very recent piece called Raga, just completed in 2013.

   Raga

Editor’s note: This clip, as well as all the others included here, can be heard in full by visiting the web page linked at the end of the interview.

This is about as adept and expert an example of MIDI sequencing as I think I’ve ever heard.  Please tell us a bit about the piece and how it was realized.

JG:  For my 13th album project, I wanted to start with a short piece exploring software synthesis.  I also wanted to write a piece that reflected non-Western musical influences, which is almost impossible given that nearly my entire formal and informal music education is in Western classical, jazz, pop and folk music.  Though Raga has little in common with an Indian raga, there’s a basic feel to it (the ostinato figures played by the softsynth) that reminds me, if only in spirit, of a raga.  The ostinato drone slowly moving from tone to tone produces a solid, dependable point of tonal repose; the rhythmic liveliness creates energy and intricacy; and the melodic and harmonic design, I hope, is pleasing to the ear, heart and mind.  The piece is scored for two instances of the Z3TA-2 softsynth using two factory patches, plus VSL chamber strings.  I often program my own synth timbres, but in this case I found two pre-programmed timbres that were right for this piece.

The first synth patch, “Seq Arp FL”, is a delightful ostinato, synchronized to tempo, that uses an arpeggiator and filters modulated by envelope generators and an LFO in such a way that the ostinato morphs its harmonic content over time, creating an interesting rhythmic pattern that delivers both repetition and variety.  The second synth patch, “Richie’s Pad”, is a beautiful pad with a rich, lush sound.  When I first sequenced the 1st violin part, I noticed pretty quickly that the “feel” of the groove was not right.  This was easily improved by shifting the first Z3TA-2 track 24 ticks after the beat.  I also moved the second synth patch 75 ticks before the beat, compensating for its relatively long attack time.  By doing this, the feel of the violin’s rhythm and the ostinato suddenly jibed, and I knew I had the groove I wanted.   Sometimes a good groove can be achieved by manipulating attack times, sometimes by shifting a track forward or backward 10 or 20 milliseconds or so, and sometimes just by choosing a different articulation or patch.  It’s all about context, so there are no absolutes that will work the same in every musical situation.
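The track-offset moves described above can be sketched in code.  This is a hypothetical illustration, not taken from Jerry’s actual sessions: note events are represented as plain (tick, pitch, velocity, duration) tuples, and the tick values assume a typical 960-ticks-per-quarter-note resolution.

```python
# Hypothetical sketch of shifting a track by a fixed number of ticks;
# the event format and PPQ value are illustrative assumptions, not
# anything from Sonar's internals.

def shift_track(events, offset_ticks):
    """Return a copy of a track with every note moved by offset_ticks.
    A positive offset lays the part back (after the beat); a negative
    offset pushes it ahead, e.g. to compensate for a slow patch attack."""
    return [(max(0, tick + offset_ticks), pitch, vel, dur)
            for (tick, pitch, vel, dur) in events]

# Ostinato laid back 24 ticks; slow-attack pad pushed 75 ticks ahead.
ostinato = [(0, 60, 100, 240), (240, 64, 96, 240)]   # (tick, pitch, vel, dur)
pad = [(960, 48, 80, 1920)]

laid_back = shift_track(ostinato, +24)   # starts 24 ticks after the beat
pushed = shift_track(pad, -75)           # starts 75 ticks before the beat
```

The max(0, …) clamp simply keeps a shifted note from landing before the start of the sequence; a DAW handles this case internally.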

I produced Raga in Sonar X2.  I’m now running X3e and I am very impressed with it.  It has beautiful ergonomics, which is funny because when X1 first came out I hated everything about it and skipped it.  But the people at Cakewalk have really created a winner with Sonar X3; it has much to admire in terms of design, functionality and stability.  I’ve been composing in the same sequencer since 1992.

SB:  Let’s turn to a work from 2010, one of my favorites, called Rhapsody.  In it you invoke a … well, I have to use the word “groovy” … 60s jazz singing vibe.  Where did you get the vocal samples?  Is this a case of a “found” sound inspiring a composition?

   Rhapsody

JG:  Rhapsody materialized because I decided to compose a piece with no orchestral instruments.  I “scored” it for two instances of FM8 and Tera, a Roland XV-3080, and vocal samples from the XP-30 vocal sample patches (from one of Roland’s cards).  I put “scored” in quotes because this is one of the few pieces I’ve written where I didn’t create a finished score.  The vocal samples are obviously from jazz vocal traditions and, yes, they inspired and influenced the direction I took the composition.

SB:  Let’s turn to a piece from 2011 called Small Matters.

   Small Matters

Actually this question could be asked of any of the example clips here, but this is as good a time as any.  I think one of the greatest challenges of sequenced music is making it sound like anything other than sequenced.  In producing sequenced music, getting the notes right is simple diligence, and making the dynamics sound compelling is not overly challenging.  But getting the performance to seem fluid rather than mechanical is something that eludes many computer music producers.  What’s your solution?

JG:  It’s definitely a challenge to program a MIDI sequencer in such a way that the music flows and has a groove, so that it has that magical communicative energy – no doubt the opposite of “mechanical”.  Dynamics are sometimes relied upon too heavily to achieve a sense of drama and intention, so the approach I’ve taken is to practice deep listening and think about how a skilled player would approach a note, phrase, interval or chord.  My workshop (Beyond the MIDI Mockup), which I presented at NAMM this year and last year in San Francisco, goes into detail about performance-level MIDI interpretation.  Essentially, it’s about first choosing from your library of sounds the most expressive sample-set (articulation) for that note, motive, phrase or passage, and then programming variations in velocity, envelope, note length and placement relative to the beat.  Multi-layer sampling contributes greatly to what a musician can do to shape a phrase not just with dynamics but also with harmonic content, and it also contributes to the effectiveness of repeated notes.  Much naturalness of expression is achieved as well through generous use of tempo variation: changing the tempo by even a small amount can really help create a natural sense of phrase.
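The kind of per-note variation Jerry describes (velocity, note length, placement relative to the beat) can be roughed out as follows.  This is a generic humanization sketch under assumed names and ranges, not his actual workflow; a seeded random generator keeps the result reproducible.

```python
import random

# Generic per-note variation sketch: jitter velocity, placement and note
# length by small amounts. All names and ranges are illustrative.

def humanize(events, seed=1, vel_range=6, tick_range=8, len_scale=0.05):
    rng = random.Random(seed)          # seeded: same "performance" each run
    out = []
    for tick, pitch, vel, dur in events:
        v = min(127, max(1, vel + rng.randint(-vel_range, vel_range)))
        t = max(0, tick + rng.randint(-tick_range, tick_range))
        d = max(1, round(dur * (1 + rng.uniform(-len_scale, len_scale))))
        out.append((t, pitch, v, d))
    return out

phrase = [(0, 60, 90, 480), (480, 62, 90, 480), (960, 64, 90, 480)]
played = humanize(phrase)
```

In practice the variations Jerry describes are deliberate choices per note rather than random jitter, which is his point; even so, a crude sketch like this shows how much of the perceived detail lives in velocity, length and placement.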

Editor’s Note: Images of a typical SONAR Tempo View and Event List from one of Jerry’s compositions follow.

What a player does physically, intuitively and spontaneously, a musician programming a sequencer has to do conceptually.   What gives a piece of music expression, intention and revelation?  Organized sound and the perception of organized sound form a complex interaction, more or less meaningful to each person’s unique way of organizing how they perceive the world.  Not everyone will enjoy sequenced music; someone might feel something vital and important to their listening experience is missing.  I like to think nothing is missing if the producer does their job well and if the listener has no philosophical or emotional objection to making music this way.  But even then, some will resonate with it and some will not, as is the case with all styles and genres of acoustic music making, and for that matter with all art and entertainment.

I like to think of a computer-based recording as having performance values.  If an expertly composed and intricate piece of music is played badly or misunderstood by the players, we have a disconnect between the composer’s intention and how the piece actually sounds.   When sequencing, it’s a bit different: I am composing while at the same time programming in such a way as to achieve both an accurate and honest interpretation of the musical ideas and a performance-level MIDI recording.  Similar to how composers believe that composition and orchestration are inseparable activities, I think of sequencing a composition as involving both invention and interpretation: composition and sound.  In the classical music world, the division of labor between the creation of music and the interpretation of music is quite extreme; not so in the pop or jazz worlds.  Electronic music allows the musician to be fully involved in both the creation and interpretation of music, which to me is healthy and contributes to the development of a well-rounded musician.

My compositional process is fairly linear, meaning I usually compose a piece from beginning to end, but only in a general sense.  Editing, experimentation, and changing or improving earlier passages, sections, notes and chords (usually at the orchestration or production level, but sometimes compositionally too) is where most of the detailed work takes place, and that is definitely not a linear process.  Brahms commented that the most difficult part of composing is “letting the superfluous notes fall under the table”.  This is good advice; often it’s what I delete and clarify that makes the composition more effective.

SB: Related to the issue of fluid/humanized sequences are the difficulties you must face when mixing music performed live with sequenced material.  The following two clips, from 2002 and 2008, incorporate a live performer (violin and soprano respectively) in the mix.  How do you approach the challenge of capturing these performances?

   Violin Concerto, 3rd Movement

  The Peace of Wild Things

JG:  One of the challenges of integrating acoustic instruments and voice with sampled and synthesized instruments is what I call “body”; acoustic instruments and the human voice have a depth and roundness that only the best sample libraries, with hundreds of thousands of well-recorded 24-bit samples, have been able to achieve.  The mix between the acoustic instrument or voice and the digital ensemble has to be really right on: if the digital ensemble is too loud, it can bury the lyrics or push the acoustic soloist too deep into the mix; if it is not loud enough, the digital ensemble will sound thin and weak compared to the vocal or acoustic instrumental part.  Some singers and players adapt easily to recording with a digitally sequenced ensemble.  As your question suggests, I give the musicians a temp recording with their solo part(s) roughly recorded, along with the written score.  This helps them feel out the tempo changes, so that when they get into the recording booth they have a good sense of how the arrangement works.

SB:  Let’s have one final musical example, your Symphony #8 from 2012.  You have made the complete score available on your web site (link at the end of the interview), and we’ll show a small extract here (not even a full page).  Tell us about the role of scoring in your composition.

   Symphony #8, 4th Movement

JG:  Writing music on a notation screen in a sequencer is not much different from writing music on manuscript paper, at least in my experience. In both cases a composer imagines and hears music inwardly while notating symbols that represent what is being heard. In earlier traditions there’s composing and performance, usually with at least some time lag between the two. In the digital medium there’s composing and the producing of a recording; the chief difference between working in MIDI and notating on paper is that computer-based instruments significantly reduce the amount of time between when a composer writes music and when that music can be heard.

Many years ago I was composing with pencil, staff paper, piano and metronome, and that was it. I could never in my wildest imagination have dreamt of how I am writing and producing music today. I am a keyboardist and I do a lot of improvisation; many of the ideas I use in composition have their beginnings as keyboard improvisations. For me, improvisation is the raw, primal, purely spontaneous aspect of making music. I don’t write down, memorize or structure my improvisations in any way; all of that takes place when I am in front of the staff view in my DAW. I have a melody, motive, theme, rhythmic pattern or harmonic progression in my mind, and I begin to sequence it. Sometimes orchestration happens at the same time; sometimes I just get the basic ideas sequenced and refine the orchestration later. I play some parts in on the keyboard, often the percussion parts, and enter notes using the mouse onto the staff.

I hear musicians complain that a computer plays back a sequence too perfectly.  But I disagree.  Often, when a sequenced piece of music sounds mechanical, it is actually because the computer’s performance capabilities are revealing weaknesses in the composition.  Perhaps the melody doesn’t really have an effective shape, or the voice-leading is crappy, or the counterpoint is too academic and stiff, or the orchestration too pompous and conventional, or there is not enough variation in envelope, velocity, note length or tempo.  Musicians who don’t work in electronic music probably don’t appreciate the depth of technique required to produce a piece of music using MIDI that “sings” and really sounds expressive.  Just as we know breathtaking photographs can be taken with a camera, which is also a mechanical device, great music can be produced with MIDI if the producer understands the strengths and limitations of the medium.  I would never bother comparing a MIDI recording to a live performance; the psychological, social and spiritual energies of people playing live together are a unique and incomparable experience.  But comparing a recording of a live ensemble with a MIDI recording – that’s a little more reasonable.  Just as a film is not a mock-up for a live play, and a photograph is not a mock-up for a painting, I don’t view MIDI as a poor substitute for live players.  I view it as an incredibly powerful and versatile musical medium without a lot of tradition behind it, at least not yet.

SB:  You mentioned earlier your gratitude at being able to take advantage of digital music technology.  What is most lacking right now?  What capabilities are highest on your wish list?

JG:  I’d like to see Cakewalk improve Sonar’s notation: correct display of 64th notes; tied and dotted triplets that conform to standard music notation; and, with dotted notes, placing the dot above or below the staff line depending on stem direction – it should never sit on the staff line, as that makes it hard to see.  These long-standing issues don’t seem to get the attention of Sonar’s developers, and if I ever do switch DAWs, this will be why.  None of these issues affects the accuracy and precision of MIDI playback, though, so it’s not that big of a deal.   Sonar has many other things going for it that are important to me: the event types in the event list can be color-coded, something Digital Performer and Cubase cannot do, and it can have multiple notation windows open at the same time – the only DAW I know of that can do that.

SB:  For people just getting started with music production primarily realized with sequencing, any advice on how to get off on the right foot?

JG:  Start by sequencing some Bach fugues.  Each note’s parameters contribute to the whole; the idea is to make the whole sound greater than the sum of its parts.  Think about how an expert player would play a phrase, and then translate that into how you program in the event list and staff view.   Think about strong beats, weak beats, dynamics, phrasing, small tempo changes, things like that.  Sequencing is about 10% note entry and about 90% tweaking and editing.  Don’t be satisfied with a 1st, 2nd or even 3rd draft. If anything bothers your ear, fix it, improve it, change it; do whatever it takes to get the sound you want.
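The “strong beats, weak beats” idea can be expressed as a simple metric-accent map for velocity.  This is an illustrative sketch only; the specific velocity offsets and the 960-PPQ assumption are ours, not from the interview.

```python
PPQ = 960  # assumed ticks per quarter note

def accent_velocity(tick, base=80):
    """Map a tick position in a 4/4 bar to a velocity accent: downbeat
    strongest, beat 3 a secondary accent, beats 2 and 4 plain, and any
    off-beat subdivision lightest."""
    beat_in_bar = (tick // PPQ) % 4
    if tick % PPQ != 0:
        return base - 12      # off-beat subdivision: lightest
    if beat_in_bar == 0:
        return base + 16      # downbeat: strongest
    if beat_in_bar == 2:
        return base + 8       # beat 3: secondary accent
    return base               # beats 2 and 4
```

A starting point like this would then be edited per note, which is where, as Jerry says, 90% of the work happens.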

SB:  Jerry, thank you so much for taking the time to share your thoughts with us.  It’s been a great pleasure.

 JG: Thanks David, it’s been a pleasure talking with you.

 

Jerry’s web site:

 http://www.jerrygerber.com/

 The full score of Symphony #8 can be viewed here:

 http://www.jerrygerber.com/cosmicconsciousness.htm

 The full mp3 clips of the examples in the interview can be heard here:

 http://www.jerrygerber.com/soundbytesinterview.htm

 
