Interview with Daven Hughes of Amaranth Audio
It’s rare these days to encounter a new synth that does something you’ve never encountered before. We talk to the developer of just such an instrument, Daven Hughes.
by David Baer, Nov. 2013
It’s rare these days to encounter a new synth that does something you’ve never encountered before. It’s even rarer when your response upon first hearing the instrument is “Holy [word-of-choice-to-follow-Holy]!” But that was precisely my reaction upon first encountering Cycle, a recently introduced instrument by developer Daven Hughes, who hails from Fredericton, New Brunswick. Even more remarkable than the stunning sounds that can be created with Cycle is the fact that it is the first synth offering from this developer. More information (as well as some very nicely done audio demo clips) can be found here:
But with innovation can come bewilderment when faced with unfamiliar technology, and with Cycle it becomes clear early on that “we’re not in Kansas anymore”. To be fair, at the time I’m writing this, Daven has yet to complete the on-line documentation, so to that extent, Cycle is a work in progress.
As can be seen from the image above (for a larger view, visit the web site referenced above), the GUI for Cycle is dazzling, but some may also find it intimidating. Things get even more so when reading through the help glossary and stumbling over terms like “vertex cube” and “intercept path”.
Fortunately, we’ve got the developer right here to lead us through the concepts behind Cycle, so let’s get down to business without further preamble.
Sound Bytes: Daven, thank you for taking the time to talk to us. Let me start with what may be a complicated question that hopefully has a less than complicated answer. Would it be fair to say that the bulk of the innovation that makes Cycle unique is to be found in the component most people would think of as the oscillator portion of a synth, but in the case of Cycle, that component also contains what would be the filter section of a more conventional synth?
Daven Hughes: It’s safe to say the method of sound generation is the most distinguishing feature in Cycle. Before I get into that, I’ll note that the rest of the synthesizer sections – envelopes, effects – are somewhat less alien, so sound designers will have a familiar place to start.
There are two ways Cycle creates sound. Similar to the classic subtractive synthesizer, it can generate sound with an oscillator. The oscillator is particularly flexible, because the user designs the wave shape with points and curves and so it can be as simple or as complex as desired. The twist is that this shape can also morph in a controlled way. To make a wave shape morph, Cycle lets the user define paths for the curve points to follow over time. This structure of points, curves, and paths is a bit like a wire frame, and the work flow of drawing wave shapes and creating paths is quite similar to 3D modeling.
The spectral synthesis component is the second way Cycle generates sound. As you suggest, this component also takes care of filtering.
Some spectral synthesizers let you operate on the spectral data of a sample, or use an image as the spectral source.
In Cycle, instead of analyzing a sample, the synth analyzes the spectral content of the signal created by the morphing oscillator. This sets the stage for further spectral editing. That work flow of 3D design applies to the spectral domain too, but instead of wave shapes, you’re drawing curves that either multiply or add to the harmonic spectrum. Like wave shapes, these curves can also morph with time. Because you’re not tediously painting the spectrum with mouse-strokes, basic filtering is very straightforward – a low pass filter takes only two clicks, for example.
S.B.: So let me try to paraphrase to see if we’ve got this right. When you talk about drawing a filtering curve, you might simply be describing drawing a basic low pass filter response curve with a value of 1.0 below the cutoff and a declining slope to the right? This is then used as a multiplier across the frequency spectrum, acting on partial amplitudes? Also, resonance would be obtained by adding a “hump” in the curve at the cutoff point?
D.H.: Yes, it’s as simple as that, with one detail: filtering curves are neutral at value 0.5 because they are bipolar, so the lower half attenuates and the upper half amplifies. That way a tall spike in the curve will boost the harmonics a lot. If the curve is in additive mode, the neutral point is at 0 and any part of the curve that is above that adds to the harmonic magnitudes.
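[Ed. note: For the programmers among our readers, the bipolar curve behavior Daven describes can be sketched in a few lines of Python. The exact gain mapping and the dynamic-range scaling below are our own guesses for illustration; Cycle’s internal formula isn’t published.]

```python
import numpy as np

def apply_magnitude_layer(mags, curve, mode, dyn_range=30.0):
    """Apply one harmonic-magnitude curve to a magnitude spectrum.

    mags  : array of harmonic magnitudes
    curve : curve values sampled at each harmonic, in [0, 1]
    mode  : 'subtractive' multiplies (neutral at 0.5),
            'additive' adds (neutral at 0)

    The dB mapping and dyn_range are illustrative assumptions.
    """
    if mode == 'subtractive':
        # Map curve 0..1 to a gain of -dyn_range..+dyn_range dB,
        # so 0.5 is neutral, the lower half attenuates, the upper amplifies.
        gain_db = (curve - 0.5) * 2.0 * dyn_range
        return mags * 10.0 ** (gain_db / 20.0)
    else:  # additive: anything above 0 adds to the magnitudes
        return mags + curve

# A two-point low pass: neutral below the cutoff, rolling off above it.
harmonics = np.arange(1, 65)
mags = 1.0 / harmonics                      # a sawtooth-like spectrum
curve = np.where(harmonics <= 16, 0.5,      # neutral below harmonic 16
                 np.maximum(0.0, 0.5 - (harmonics - 16) / 32.0))
filtered = apply_magnitude_layer(mags, curve, 'subtractive')
```

With this mapping, harmonics under the cutoff pass through unchanged and those above it are progressively attenuated, which matches the “two clicks for a low pass” idea.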
Now, keep in mind that there are two halves to the frequency domain because each harmonic has a magnitude and phase. The Spectrum Filter has a separate mesh for the phase spectrum. With control of the phase spectrum you can get many great effects – you can easily widen the stereo image, create breathy evolving pads, or make a sound more organic.
Graphically, Cycle presents the phase spectrum over time unwrapped. Phase unwrapping means when a harmonic’s phase drifts, say from 0.96 to 0.99 to 0.02, Cycle knows that the phase didn’t suddenly jump down by a large amount, it went up slightly and wrapped down, so then it can restore the true phase, which in this case would be 1.02. With unwrapping, many patterns become clear; without it the phase spectrum over time is chaotic.
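[Ed. note: The unwrapping Daven describes is easy to illustrate. This Python sketch works in normalized cycles, matching his 0.96 → 0.99 → 0.02 example; it is our own illustration, not Cycle’s code.]

```python
def unwrap_phases(phases):
    """Unwrap a sequence of phases given in cycles, wrapped to [0, 1).

    When the wrapped value jumps by more than half a cycle between
    frames, assume it actually wrapped around and offset it by a whole
    cycle, so 0.96, 0.99, 0.02 is restored to 0.96, 0.99, ~1.02.
    """
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        step = cur - prev
        if step > 0.5:        # apparent jump up: it really wrapped down
            offset -= 1.0
        elif step < -0.5:     # apparent jump down: it really wrapped up
            offset += 1.0
        out.append(cur + offset)
    return out
```

Calling `unwrap_phases([0.96, 0.99, 0.02])` restores the last value to roughly 1.02, just as described.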
S.B.: In your online information about Cycle you speak of utilizing five dimensions in the construction of the output wave forms. I gather that time is certainly one of them, and then curves have two dimensions. Right so far? And if so, dare we delve into what the other two dimensions are?
D.H.: Part of the design challenge was to find a way to make sounds expressive. With just the ability to morph wave shapes over time, Cycle would barely improve on a sampler with a single sample mapped across the keyboard. Of course, the content of the synthesized sound might be spectacular, but it would be as limited as an audio sample in terms of expressiveness and keyboard range.
The solution was to allow the wire frames, those defining the time-morphing wave-shapes and spectrum filters, to themselves morph. So, the two other dimensions are key scale, so that you can tweak the wire frame to maintain the naturalness of the sound across the keyboard, and modulation, so that some range of expressiveness can be modeled. Think of how a piano note sounds mellow when struck softly and brilliant when hit hard and all the gradations between. This is what you can do with the modulation dimension.
S.B.: I’m not clear on what you mean when you use the term “wire frame”. Can you give us a simple example of, say, what the wave-shape frame consists of? Also, how many wire frames in total are we talking about here?
D.H.: There are three wireframes involved in sound generation — one for the wave shapes, one for the harmonic magnitude domain, and one for the harmonic phase domain. I think most of us have a mental image of what a wireframe is in 3D design and animation – it’s the set of vertices and connecting lines that make the skeleton of some form, some manifold. In Cycle-speak, I call this structure a mesh.
We’ve established that time is one morphing dimension and that each curve point follows a path over time. Say the ends of this path are points A and B.
Now let’s introduce morphing along key scale (i.e., simply stated, low notes to high notes). Instead of A and B being fixed points, they each have a path over the key scale, just like the curve point has a path over time. At the low end of the key scale, A and B can be in one position in the mesh; at the high end, they can be in another. Somewhere in between, the points will be a weighted average of the two fixed positions, the weight depending on how close the current note is to either end of the scale. When you do a glissando up the keyboard, the time-path transforms from one configuration to another.
There’s yet another morphing dimension, modulation. So let’s put it together: a curve point is on a path over time, and the ends of that time-path are on their own paths over the key scale, and the end points of those key-scale paths are on paths along the modulation range. Ultimately there are 4 configurations of the time-path that are morphed between as the keyboard note and modulation source change.
This is a bit like explaining how to tie a shoelace over the phone, so check out this diagram showing the morphing process:
S.B.: OK, those diagrams help make all this a lot clearer. Can we now turn our attention to the user interface? Give us a tour of the main areas.
D.H.: Let’s start with the Time Surface. There’re two panels that show two views of the waveform surface: a 3D topographic panel for editing the wave shape paths with respect to some morphing dimension, and a 2D shape editing panel for editing the wave shape itself. Above the 2D wave shape editor, there’s a thin magenta band. This is a helper to identify where the sharpest points of the curve are.
It is important to realize that the 2D/3D panels are just different presentations of the same mesh. As such, certain things are linked between them, like zooming and selected vertices. To the left of the editing region there are controls for the current mesh layer.
Likewise, in the Spectrum Filter, there’re two panels: one for editing the paths and one for editing the curves that follow the paths. Clicking the magnitude/phase buttons switches between the two modes and between the data that is visualized in the windows.
Above the Spectrum Filter is the envelopes section. To the left of the envelope graph are the controls to set sustain and loop points. Envelope curves morph, much like the wave shape and spectral curves, but the 3D envelope panel for editing the paths is hidden by default. When you select Key or Modulation as the displayed morphing dimension, in place of time, the 3D envelope editor is displayed.
On another tab behind the 2D harmonic spectrum window is the Deformer panel. Here is where you can create a detailed 2D curve that “deforms” a curve-point’s path, making it more intricate. Usually a path is just a straight line that connects the start and end points, but with a deformer, the path can be anything from an exponential curve to a noisy wobble.
In the upper middle part of the UI are the Position sliders, which are important to the morphing workflow. There are three sliders, one for each morphing dimension, and in combination with the mouse position their values set the 5-D cursor position. When you select a vertex (right clicking), the one closest to this cursor is chosen. So for example, if you wanted to adjust a curve at the high end of the key scale, you’d set the key slider position to max and do the edit; the vertices moved would be those belonging to the upper key scale configuration. [Refer back to the earlier morph diagram to see this illustrated – Ed.].
Below that, the Vertex Parameters panel displays the values of selected vertices averaged as a group. These can be selected vertices of any of the several vertex-based editing panels. In addition, it lets you assign deformers to the different parameters of the selected vertex cube. For example …
The effects tab sits behind the wave shape editor… Here are the wave shaper, impulse response modeler, unison, delay, EQ, and reverb. Thankfully they’re all pretty straightforward. Just a word about the impulse modeler: it’s not a reverb unit, but more along the lines of a tube amplifier effect. Impulses do not morph, at least, not yet!
Then finally there’s the Preset Browser tucked away in another tab behind the wave shape editor. In the future there may be many hundreds of presets, but navigating these should be easy with the browser. Just type in a keyword or two and hit enter.
S.B.: So how does this all come together in sound design? Can you talk us through the process of building a preset? [Ed. note: I sent Daven a single sample I cobbled together from presets in … well, let’s just say another synth … as grist to the Cycle preset mill. This is what he’s talking about in the next section.]
D.H.: I haven’t introduced layers yet, and they’re important to the sound design process. When I said there are just three wireframe/meshes, I lied. Each of the three oscillator domains actually has unlimited layers, and a layer is basically a mesh and some other properties like pan. Harmonic magnitude layers are special and have a couple more properties – dynamic range and mode (additive or subtractive).
So let’s jump in. After loading the reference sample, the first thing I notice is there are two patterns in the time domain.
Most likely, this is the synthesis of two detuned oscillators or sound samples.
After tweaking the wave-pitch envelope to straighten the patterns out, we can begin tracing them in the wave shape editor. After roughly outlining the first pattern, I create a second Time Surface layer. I eyeball the wave shape of the second pattern and draw it. It’s phasing downward, indicating the voice is detuned flat. Time Surface layers cannot be detuned, so to replicate this I drag down the rightmost vertices in the 3D editor, simulating a moving phase. This isn’t quite the same as detuning, as the phase drift is limited to the duration of the preset, but it’s close.
The high-frequency details of the wave shapes are hard to trace, and further, they do not scale well with key scale. Frequency domain patterns scale more naturally, so I’ll let the spectral filtering fill in these details. To that end, I set the first harmonic magnitude layer to additive mode and add some peaks in the 2D editor to mirror the spectral peaks of the sample’s spectrum.
Auditioning some notes, I notice that high notes are far too bright compared to low. This is partly a consequence of the time-domain approach. To counter this, I’m going to add a filter layer that will boost the treble at the bottom of the key scale and dampen it at the top. So I add a harmonic magnitude layer and set its mode to subtractive.
Do these steps to create the filter in the 2D spectrum editor:
1. Define a mild filter roll off (should take just 2 points)
2. Deselect key linking
3. Move key position slider to its minimum
4. Drag the curve points to boost the high frequencies
5. Move key position slider to its maximum
6. Drag the curve points to reduce the high frequencies
Test some notes and see the effect. If it’s not enough, you can adjust the curve again or increase the dynamic range of the layer.
With harmonic magnitude layers, order matters. A subtractive layer filters all layers underneath it, so in this case, the additive harmonics of the first layer are being filtered by the subtractive, equalizing layer. This is not quite what I want, because I only want the wave-shape harmonics to be corrected over the key scale; the spectral details I added in the first layer are fine. So I want to move the subtractive layer down, and I do that with the up-down layer buttons in the layer controls area.
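[Ed. note: The layer-ordering rule Daven mentions is worth pinning down with a toy Python sketch. The bottom-up mixing and the plain multiply used here are simplifications of our own, for illustration only.]

```python
import numpy as np

def mix_magnitude_layers(layers, n_harmonics=32):
    """Combine harmonic-magnitude layers bottom-up: an additive layer
    adds its curve to the running spectrum, while a subtractive layer
    multiplies (filters) everything accumulated beneath it.
    """
    spectrum = np.zeros(n_harmonics)
    for mode, curve in layers:          # bottom layer first
        if mode == 'additive':
            spectrum = spectrum + curve
        else:                           # 'subtractive'
            spectrum = spectrum * curve
    return spectrum

base    = np.ones(32)                   # the wave-shape harmonics
detail  = np.full(32, 0.25)             # added spectral details
lowpass = np.linspace(1.0, 0.0, 32)     # the equalizing layer

# Subtractive layer above the base but below the detail layer:
# only the base harmonics get filtered, the details pass untouched.
mixed = mix_magnitude_layers([('additive', base),
                              ('subtractive', lowpass),
                              ('additive', detail)])
```

Moving the subtractive layer to the top instead would filter the detail layer too, which is exactly the problem the up-down layer buttons solve here.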
The attack of the sound is distinct from the rest. We can see this visually in the waveform surface and the spectrogram: chaos at the start. A study I read showed that the attack is one of the basic elements that give a sound identity, comparable to the harmonic spectrum, so it’s important to get this right.
First we can use deformers to create perturbations in the wave shape in the attack:
1. Click the deformers tab (next to the 2D spectrum panel)
2. Draw an impulse-like curve, something that oscillates and decays away quickly
3. In the Time Surface, select a vertex at the time-minimum position
4. In the Vertex Parameters panel, assign the deformer to the selected vertex’s amp property
5. Add another deformer channel by clicking the ‘+’ button in the controls area, and repeat 2-4 for another vertex
The harmonic phase spectrum is also great for creating this sort of chaos. Hop into the phase spectrum view and in the 2D editor create a couple of points around the middle of the spectrum. Between these points we will assign a deformer to the shape of the curve itself. This keeps the mesh topology simple and makes it easier to work with a complex curve. So make some sort of complicated shape in a new deformer channel, select the leftmost vertex and assign the Freq-vs-Phase property to that new channel.
To complete the effect we must make sure the changes we made only affect the attack. When there’s a Freq-vs-Phase deformer assigned replacing the default curve arc, the curve sharpness parameter behaves like a volume multiplier for that deformer. Since the sharpness parameter is also deformable, this means we can dampen the deformer-curve with yet another deformer. A quick decay curve is what we want here, so create the deformer channel and make the assignment to the sharpness property. The result will be the rapid distortion of the middling phases, creating a sort of noise.
To finish up I’m going to use a scratch envelope to improve the phasing effect of the second wave shape layer. With a scratch envelope, I distort the timeline of the mesh with another curve. Since envelopes loop, I can sustain the phasing effect forever instead of it expiring after a short time. In the scratch envelope, create an inverted hockey-stick shape (see figure), select the middle vertex and click the ‘loop start’ button.
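[Ed. note: Conceptually, a looping scratch envelope remaps mesh time through another curve. This Python sketch is purely hypothetical, assuming a loop point below 1.0; the names and mapping are our own illustration, not Cycle’s internals.]

```python
def scratch_position(t, curve, loop_start):
    """Remap playback time t through a looping scratch envelope.

    curve      : the scratch envelope, a function mapping envelope
                 time in [0, 1] to mesh time (the timeline distortion)
    loop_start : once t passes this point, the rest of the envelope
                 repeats indefinitely, sustaining the effect
                 (assumed to be strictly less than 1.0)
    """
    if t > loop_start:
        span = 1.0 - loop_start
        # Fold t back into the looping region after the loop point.
        t = loop_start + ((t - loop_start) % span)
    return curve(t)
```

Before the loop point, playback is whatever the curve dictates; after it, the tail of the envelope cycles forever, which is how the phasing effect is sustained instead of expiring.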
In Cycle version 1.1, different scratch envelopes can be assigned to different layers. The looping effect is irrelevant for all layers except the second in the Time Surface, so for the others set the scratch channel to null.
That’s it! Hopefully you can see how the sound design process is quite measured and linear. With a bit of practice, something like this sound can take 5-10 minutes to get down, whereas it might take an indefinite amount of trial and error with other tools. See what you think of the result. [Ed. note: the original sample and a brief clip of music using the Cycle preset constructed by Daven follow.]
S.B.: Well, as I said earlier, clearly we’re not in Kansas anymore. Cycle sound creation seems to be a process unlike anything I’ve ever encountered. So, how did all this happen? Where did Daven Hughes come from, both musically and technologically, that resulted in such a unique device for sound creation?
D.H.: I found FL Studio when I was in high school and tinkered with it constantly. I thought it was a fun challenge to keep everything synthesized, plus I was broke so I didn’t buy any other synths or sample sets.
So that involved a lot of sound design. I’d always be comparing the waveform output to the sample I wanted to imitate, but it baffled me how to get the waveform right by tweaking a dozen different synth knobs. Some synths could draw a wave shape or import single cycles, but the shape was then impossible to modulate, and it often sounded too gritty to be useful.
Soon it was clear there were classes of sound that were very troublesome to synthesize – those complex in wave shape and those varying with time, and almost all real instruments fall into this category. Basically, the idea was to create sound from geometry rather than trigonometry. It came from the need to bring those classes into the sound-space accessible to a synthesizer, and do it in a way that wouldn’t tax one’s patience egregiously. I think the measure of a synthesizer is not what it can do “in theory” but what it can do in 10 minutes.
Once I thought that converting images into sound was the way to go. Not by using the image as a spectrogram (as had been done before), but rather using it as the wavetable with each pixel column as a cycle. It turns out that our ears prefer much different patterns than we see visually, so the results were typically unpleasant. Still, that experiment inspired a more visual approach to sound design.
Part of the reason I went through a computer science degree was to develop the idea. I tried to integrate it into the degree wherever possible; the curve design, for example, was something I did as a thesis project. So conceptually, Cycle was a long time in the making.
S.B.: Daven, thank you for taking the time to talk to us. We wish you the best success with Cycle.
D.H.: It’s been my pleasure.