Virtual Orchestra Composition and Production by Guest Author Jerry Gerber

Jerry Gerber is a composer of modern classical music, but his orchestra lives on computers in his studio.  Jerry shares some of his secrets for fulfilling computer-based composition in this illuminating tutorial.

 

by SoundBytes Magazine, Jan. 2016

 

Anyone who knows the history of classical music can tell you that a composer who decides to write a ninth symphony is risking demise shortly after completing the work.  The ninth was the last one written by Beethoven, Schubert, Mahler and several more of the great ones.  But that didn’t deter composer Jerry Gerber from thumbing his nose at mortality and undertaking that mission.  And we can all be grateful, given the stunning music therein.

Jerry is a San Francisco-based composer who is a master of realizing his compositions using Vienna Symphonic Library sampling technology along with a smattering of software synthesizers.  The results are so amazingly “lifelike” that all but the most astute listeners would be completely unaware that the recordings were anything other than the real thing – an actual orchestra recorded in a high-end sound studio.  Ironically, this is not actually his principal goal, as you will see in what he’s written.  But the results he achieves are striking, no matter what the primary intent was.

We interviewed Jerry in 2014 and that interview can be read here:

http://soundbytesmag.net/jerrygerberinterview/

Given the milestone of completing a ninth symphony, we asked Jerry if he would share some of his secrets on composing directly for a DAW medium, secrets which must surely have been accumulated over the course of constructing performances of nine symphonies, a number of other compositions for full orchestra and some lovely chamber ensemble pieces as well.  He graciously consented to do just that.  But we had to hope he didn’t take too much time to do so … now that the ninth symphony is in the bag, who knows how much time he’s got left!

The baton is all yours, maestro!  Take it away.

 

My freedom will be so much the greater and more meaningful the more narrowly I limit my field of action and the more I surround myself with obstacles. Whatever diminishes constraint diminishes strength. The more constraints one imposes, the more one frees one’s self of the chains that shackle the spirit.

                                                                                                Igor Stravinsky, Poetics of Music

 

A paradox of the medium I work in is that though I receive sincere praise as to how “realistic” my virtual orchestrations sound, my intent is not, and never has been, to fool listeners into believing that they’re hearing a recording of a live ensemble.  Instead, my aim is to create music and recordings that are expressive, satisfying and artistically effective as compositions and sound, in the medium of computer-based instruments.  My art is a studio art, not a performance art, at least not performance in the traditional sense.  To bring MIDI performance values up to a high level of artistic expression, the composer must understand composition and MIDI programming, and this understanding results from a long and deep commitment to the medium. As always the real work is in the details. The digital orchestra, like any artistic medium, has its strengths and weaknesses, its potential and its limits.  I try to be cognizant of both as I work with new music possibilities. 

My new CD, Virtual Harmonics, is the product of about two and a half years’ work.  This recording contains a new symphony for virtual instruments and four short pieces.  One of the most joyous aspects of composing for its own sake, rather than as an adjunct to film, TV or games, is that the music itself determines the content and form of the work.  This is both liberating and challenging; the piece expresses nothing but itself and the musical values and imagination of the composer.

Symphony #9 for the Virtual Orchestra is a four-movement, 34-minute work for virtual instruments, including orchestral samples from the Vienna Symphonic Library Orchestral Cube, software synthesizers including Tera, Massive, FM8 and Z3TA, and choir samples from Requiem Pro.  Each movement is designed around a few themes, sub-themes and motives, and the development proceeds from the economical use of these materials. 

The 1st movement begins with double chromatic mediant harmonies in the divisi violins and divisi violas, setting the momentum and tension.  Other primary material includes the horns at measure 3 and the cellos at measures 10-11 and again at measures 32-33.  There are also counter-motives in most of my symphonic movements.  In this movement there are only orchestral samples, no vocals or synths.

Structure evolves from content – at least that is my experience.  The form of a piece is suggested and sometimes determined by the ideas themselves, where they want to go and how they get there.  I don’t start off with a pre-existing idea regarding overall structure, at least not consciously; I usually have an approximate length in mind, but even this depends on the ideas themselves, and the ideas are guided by subjective taste and aesthetic sensitivity.  Freedom of imagination is the artist’s closest ally.

Example of Event List from SONAR Project

 

The second movement utilizes three instances of Z3TA+ 2, Cakewalk’s ingenious software synthesizer.  I often play off arpeggiated rhythms and LFO-modulated timbres in my orchestrations; these dynamic harmonics often give clues as to how the orchestration, rhythm and harmonies should proceed.  The integration of orchestral samples and software synthesis is a natural starting point of exploration in this medium.  Where virtual orchestration and traditional orchestration meet is often in specific ideas about the organization of timbre.
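Since LFO-modulated timbres come up repeatedly in these orchestrations, here is a generic illustration of what an LFO-modulated parameter looks like over time.  This is a plain sine LFO sketch, not Z3TA's actual engine; the rate, depth and center values are invented for the example.

```python
import math

def lfo(t, rate_hz=0.5, depth=0.3, center=0.5):
    """Return a sine LFO value at time t (seconds), oscillating
    around `center` by up to +/- `depth` at `rate_hz` cycles per second."""
    return center + depth * math.sin(2 * math.pi * rate_hz * t)

# Sample the modulation of, say, a filter-cutoff parameter once per beat
# at 120 BPM (one beat = 0.5 s):
curve = [round(lfo(beat * 0.5), 3) for beat in range(8)]
```

Sampling the LFO on the beat like this is one way such a modulation can lock to the arpeggiated rhythms the timbre plays against.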

Brief Example of Some Score Notation

 

Here are a few principles of orchestral writing it’s good to be aware of:

Transparency:  This implies that each musical part can be heard and has its own sonic space in which to sound.  The ear rejoices in hearing chords sound together, but also in hearing each line as a thread in the tapestry of the musical texture.  One meaning of transparency can be demonstrated by a complex, dense passage with thick chords in which the linear polyphony remains audible and the ear can follow a given instrument.  This is often achieved by eliminating all unnecessary notes and materials.  Brahms said, “It is not hard to compose, but it is wonderfully hard to let the superfluous notes fall under the table.”  Often, it is what we omit from the composition that defines its expressive power.  The composer who doesn’t know the value of silence will not come to know the power in the notes.  Transparency allows the music to breathe, letting space and silence infuse their expressive qualities into the work.

Orchestral Weight:  Orchestral weight is defined by how many instruments are assigned to a specific musical part.  Outside of true polyphonic texture, musical parts are often in a hierarchy of melodic and rhythmic importance.  There are numerous hierarchies in music; intervallic, dynamic, rhythmic and temporal processes are in a constant state of change.  In orchestral writing, the idea of the long line is crucial because it is through a single melodic thread that the structures and ultimate shape of the piece unfold.  The ability to sustain the long line is part of every good symphonist’s technique, whether writing for acoustic instruments or computer-based instruments.  As Copland pointed out, it is one thing to write a successful 3- or 4-minute piece, another thing entirely to craft a much longer work that achieves unity, variety, cohesiveness, and both surprise and inevitability.  The depth of thought and feeling a composer brings to a piece directly influences how techniques will be used, and while technique itself can be learned and practiced, desire, imagination and the will to write music is something a composer finds only within his own psychological, intellectual and emotional resources.

Orchestral Balance:  Balancing the ensemble means that loud passages are not too loud, soft passages are not too soft, and the transitions between them create the desired effect.  It also means the composer is considering the four basic frequency areas: 1) the bass (20-200 Hz), 2) the low mid-range (200-1000 Hz), 3) the high mid-range (1000-5000 Hz) and 4) the high range (5000-20,000 Hz).  The ability to hear the subtle interaction of harmonics, the inner voices of a contrapuntal or homophonic texture and the difference between very slight increases and decreases of volume (1 dB and less) is a necessary skill in this kind of work.  Bob Katz, the mastering engineer, says that monitoring at around 83 dB is a very good idea: if we mix too loud we get an artificial sense of how the lowest and highest notes will sound (they’ll be overemphasized), and if we mix at too soft a volume, we may be tempted to bring up the bass and high notes too much, thereby throwing the mix out of balance.  Obviously, there’s a huge subjective component at work, because musical style often sets the bar as to how a mix should proceed.  Another aspect of balance is ensuring that the orchestration isn’t cluttered with instruments that add nothing to the desired tonal color of the mix, the overall sonic impression of the music.  The principles we learn in the study of harmony, counterpoint and orchestration must always be considered in context – every musical situation is different, even within the same piece, and the musical experience must exist within the flow of time.  Theory may be a starting point in composition, but sooner or later intuition, imagination and the need to experiment drive the ultimate shape of the work.

In the 3rd movement of the symphony, I programmed choir samples from the Requiem Pro library.  Sometimes I have the voices in front singing the primary line; other times, they are blended into the orchestral texture.  Syllabic variation is achieved via a MIDI controller, and volumes are adjusted with controllers 7 (volume) or 11 (expression).  Writing an adagio is challenging because a slow piece should not drag; it should not feel like it is aimlessly and slowly moving about, but rather should have direction and momentum, even if very subtle.  By examining the tempo map for this movement you’ll see there are many tempo changes, serving to enhance the musical flow and maintain the sense of direction.

SONAR Tempo Map from 9th Symphony
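As a rough illustration of what one region of such a tempo map can encode, the sketch below interpolates BPM measure by measure so that a gradual tempo change stays smooth.  The function and the values are mine, invented for the example; they are not taken from SONAR or from the symphony itself.

```python
def tempo_ramp(start_bpm, end_bpm, start_measure, end_measure):
    """Return (measure, bpm) pairs, linearly interpolated one per measure."""
    span = end_measure - start_measure
    return [
        (start_measure + i, round(start_bpm + (end_bpm - start_bpm) * i / span, 2))
        for i in range(span + 1)
    ]

# Ease from 56 to 63 BPM across measures 40-48 of a slow movement,
# so the music gathers momentum without an audible gear change:
ramp = tempo_ramp(56, 63, 40, 48)
```

Many small ramps like this, rising and falling, are one way a slow movement can keep its sense of direction without dragging.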

 

The 4th movement makes use of various software synthesizers, more than the other movements.  It makes use of pedal points and counterpoint and is based on only a few melodic ideas and motives.  The violins in the opening measures (mm. 1-25) might be considered the main theme, and there are sub-themes that occur throughout the piece.  I also use a variety of percussion in this movement, including snare, tambourine, cymbals, gong, harp and a complete drum kit from EZdrummer.

I cannot stress enough the importance of MIDI controllers.  Phrase-shaping is critical to crafting a musical line that has variation in dynamics, velocity, note length, location relative to the beat, attack and release time and sample-switching.  As the event lists in my compositions illustrate, a lot of programming goes into effective phrase-shaping; it’s not just a matter of choosing the right articulation – that’s only the first step.  Depending on the dynamics, tempo and orchestral factors, sometimes deep programming is necessary to create a line that has fluidity, expression, naturalness and the sense of intention that comes with attention to detail.  This is why composing and producing in this medium can take lots of time: the composer is not just writing the music, but also interpreting it through programming and mixing.

I set up my controllers so that controller 18 is assigned to attack time and controller 19 to release time.  These are the two components of the ADSR envelope that I use the most.  To create a smooth legato line, particularly in the strings, adjustments of attack and release are often required, in addition to choosing the best sample-set for the passage in the first place.

Another component of phrase-shaping is velocity.  Emphasizing strong and weak beats is needed to overcome the sense of mechanicalness that always degrades musical expression; it is the opposite of intention.  The precision with which the computer can perform music is a liability only when the musician doesn’t understand phrasing.  We can introduce randomness through various means: variation in tempo, strong and weak beats, displacing notes slightly before or after the beat and, with VSL’s software player, varying the pitch of attacks.

Sometimes I’ll use a sample-set built on three trumpets; in other words, the sample-set is a recording of three trumpet players.  A unison line employing such samples will sound fuller than a recording of just one trumpet player.  Other times I’ll write for three independent trumpets (let’s say they’re playing the same line in unison) and offset them in both time and pitch – in time by several milliseconds, and in pitch by detuning them 5 or 10 cents or so.  This creates a chorus effect and adds depth, complexity and variation to the sound.  A sample-set can consist of thousands of samples – every note is sampled in numerous playing styles and at numerous dynamic levels.
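A few of the humanization devices described above can be made concrete with a small sketch: beat-weighted velocities, slight timing displacement off the grid, and the unison detune/offset trick.  Everything here is hypothetical – the weights, jitter amounts and the bend-range assumption are mine, not taken from VSL, SONAR or any particular synthesizer.

```python
import random

# 4/4 metric weights: strong, weak, medium, weak (invented values).
BEAT_WEIGHTS = [1.00, 0.88, 0.94, 0.88]

def shaped_velocity(beat_in_bar, base=96):
    """Scale a note's velocity by its beat's metric weight (MIDI 1-127)."""
    return max(1, min(127, round(base * BEAT_WEIGHTS[beat_in_bar % 4])))

def displaced_tick(grid_tick, jitter=6, rng=None):
    """Move a note a few ticks off the grid to soften mechanical precision."""
    rng = rng or random.Random(grid_tick)
    return grid_tick + rng.randint(-jitter, jitter)

def cents_to_pitch_bend(cents, bend_range_cents=200):
    """Map a detune in cents to a 14-bit pitch-bend value (center 8192),
    assuming the synth's bend range is set to +/- 2 semitones."""
    return 8192 + round(cents / bend_range_cents * 8192)

# Three unison trumpet copies: center, slightly sharp and late,
# slightly flat and early - the chorus effect described above.
voices = [
    {"bend": cents_to_pitch_bend(0),  "offset_ms": 0},
    {"bend": cents_to_pitch_bend(7),  "offset_ms": 12},
    {"bend": cents_to_pitch_bend(-7), "offset_ms": -12},
]
```

In practice the DAW's event list holds the resulting velocities, tick positions and bend values; the point of the sketch is only that each device is a small, systematic perturbation of an otherwise exact performance.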

When I use reverb, I use one reverb for the entire piece.  A mastering engineer once suggested that I use one reverb for the winds, one for the brass, one for the percussion and one for the strings.  I tried this for a while but didn’t like the sound; I heard it as weakening the cohesiveness of the space.  Using one high-quality reverb (I use the Yamaha SPX-2000) allows for greater connection between the sections, and then I can adjust how much or how little reverb each section gets.  With instrumental music, I generally record the final wave file in stereo with reverb.

After I am satisfied that the composition is finished, I then proceed to check the MIDI sequence for errors.  I export the file into Sibelius and create the score.  Why do I create a score if live players are not involved?  1) The score helps me find mistakes, miscalculations and other issues that I may have overlooked while composing, 2) the score allows me to discuss my work with students, and 3) if I publish the piece for players I add the necessary breathing, phrasing, bowing, dynamic and articulation marks and the piece can be played by musicians.  Another important creative purpose of a musical score is that it brings a second sense, the visual, to the process of composition.  Though it’s obvious that the way music sounds is far more important than the way it looks on the page, notation allows the composer to consider the orchestration, harmonies, counterpoint, structure and textures of the piece in high-level detail.  The details I don’t put in the written score are those which instruct players how to play a given passage.  Since this information is programmed abundantly in the MIDI sequence, and since there are no players, I have no real reason to include these markings.  The great power of notated rhythm is its potential for intricacy, precision and detail; the only downside arises when rhythms are interpreted and performed without gesture, expression and intention.  Notation allows for greater control of complexity and contrapuntal processes, something that overdubbing tracks doesn’t achieve to the same degree.

After I’ve finished the score, I then render the MIDI performance into a wave file.  I generally create a stereo wave file; stems are unnecessary at this point if the MIDI sequence was programmed with sufficient care.  If I am working with singers or instrumentalists I make stems as well.  The final wave file uses volume envelopes, which I think of as the macro level of dynamics.  Here the composer takes off his composition hat and puts on his conductor’s hat (or mastering hat if you prefer) and works with the overall volume of the various sections of the piece.  Rather than using compression, I apply volume envelopes to the stereo wave file; they accomplish a similar goal with a high degree of precision.  I use Ozone 5 for mastering, and often apply EQ, stereo imaging and a small amount of harmonic exciter to the music.  I take my time and give myself a few days or even a few weeks or months to get used to the mix.  If I am not satisfied I redo the signal processing until I feel I’ve achieved the best results I can.  I don’t consider the project complete until I burn the final master and send it off to the duplicator.
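To make the macro-dynamics idea concrete, here is a toy sketch of a piecewise-linear volume envelope applied to a sample buffer.  It stands in for what a DAW's envelope lane does rather than for any specific tool, and the breakpoint times and gains are invented for the example.

```python
def envelope_gain(t, breakpoints):
    """Linearly interpolate a gain at time t (seconds) from a sorted
    list of (time, gain) breakpoints; hold the end values outside them."""
    if t <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (t0, g0), (t1, g1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= t <= t1:
            return g0 + (g1 - g0) * (t - t0) / (t1 - t0)
    return breakpoints[-1][1]

def apply_envelope(samples, sample_rate, breakpoints):
    """Scale each sample by the envelope gain at its moment in time."""
    return [s * envelope_gain(i / sample_rate, breakpoints)
            for i, s in enumerate(samples)]

# Swell from 0.8 to full level over two seconds, then ease back:
bp = [(0.0, 0.8), (2.0, 1.0), (4.0, 0.85)]
```

Unlike a compressor, which reacts to the signal, an envelope like this is written by hand against the timeline, which is what gives it the conductor-like precision described above.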

 

Jerry Gerber is a composer and music producer and has produced 13 albums including 9 symphonies.  He has scored soundtracks for film, television, animation, games, dance and documentaries, including all of the music for The Adventures of Gumby TV series and Loom, the well-known computer game by Lucasfilm Games.  He resides in San Francisco where he teaches music composition and electronic music production.  His CDs may be purchased by contacting him at jerry@jerrygerber.com.

 
