Review – Reverberate 2 from Liquidsonics
Reverberate 2 raises the bar on the realism that can be obtained from convolution reverb courtesy of some very innovative thinking by its developer.
by David Baer, Sept 2016
Liquidsonics released the first version of the Reverberate reverb plug-in approximately seven years ago. It was an excellent convolution reverb that boasted advanced modulation options as its competitive edge. Now there’s Reverberate 2 which takes that capability vastly further. With this new release we have what is arguably the most breathtakingly realistic convolution reverb currently on the market, and the reasonable price makes it all the more attractive.
Reverberate 2 is available for Windows and Mac in both 32-bit and 64-bit versions and all the usual formats (the VST is VST 2, however). It lists for £80, and occasional sales with attractive discounts have been known to happen. A reasonable collection of impulse files for various spaces and reverb types is included, and two additional (and exceptional) impulse collections are available as free downloads – these are compatible only with Reverberate 2. We’ll discuss this important extra in some detail later.
We see the term “convolution reverb” all the time, but convolution is actually a general technique used widely in DSP for much more than reverb processing. In fact, convolution is the application of a (sometimes very complex) filter, but since it is so marvelously suited to supplying reverb solutions, it is best known in the world of computer sound production as a reverb technology. In the next section, I am going to attempt to explain what convolution is all about. Those who already understand the subject may simply skip that section and proceed straight on to the actual review.
A Convolution Primer
Convolution is applicable to both continuous real-world signals and digital signals comprised of discrete samples, and it is not limited to audio signals. Here, however, we limit our discussion to digital audio.
We must begin this discussion by talking about linear systems. A linear system has an input and an output, and to be linear it must follow two rules. First, scaling the input scales the output proportionally: the output need not look anything like the input in terms of waveform or timing, but if we change the amplitude of the input, the amplitude of the output changes by the same factor. Second, if we send two signals through the system and sum the outputs, we get exactly the same result as if we had summed the inputs before sending them through the system. Convolution can be used to duplicate the effect of any linear system.
Examples of linear systems include basic delay, filtering, and some kinds of reverb. Examples of non-linear systems include anything that does dynamics processing (compression, expansion, gating, etc.).
One other thing about linear systems: if you send the same signal through multiple times, the outputs are guaranteed to be identical each time – which is great unless you want a little variation, but we’re getting way ahead of ourselves on that point.
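The two rules can be checked concretely with a toy linear system in a few lines of Python. This is purely an illustrative sketch; the `delay3` function here is a hypothetical stand-in for any linear processor, not anything from Reverberate 2.

```python
def delay3(signal):
    """A simple linear system: delay the input by three samples."""
    return [0.0, 0.0, 0.0] + list(signal)

x = [1.0, 0.5, -0.25]
y = [0.2, -0.4, 0.8]

# Rule 1 (scaling): doubling the input doubles the output.
assert delay3([2 * s for s in x]) == [2 * s for s in delay3(x)]

# Rule 2 (additivity): summing the inputs first gives the same
# result as summing the two outputs afterward.
summed_inputs = delay3([a + b for a, b in zip(x, y)])
summed_outputs = [a + b for a, b in zip(delay3(x), delay3(y))]
assert summed_inputs == summed_outputs
```

A compressor would fail the first assertion, which is exactly why convolution cannot capture dynamics processing.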
Now, in the digital audio world, a unit impulse is one like that in the figure to the right: a single unity-value sample followed by an indefinite number of zero-amplitude samples. It may not look like it, but that impulse contains all frequencies. Suppose we send a unit impulse into a (linear) system that delays the output by a duration of three samples. The output would look like the second image to the right. The output of a linear system that processes a unit impulse is called the impulse response.
So, we are ready to look at a first example of convolution specifics using this case of the three-sample delay. If we take the samples of the impulse response from sample 0 to the last non-zero sample and reverse them, we have the means to do convolution. In this case, our impulse response is four samples in length – samples 0 through 2 are zero and sample 3 holds the impulse (this is a much-simplified and not-very-realistic example for purposes of illustration).
Here in narrative form is how to convolve the impulse response with any signal. Remember, we have reversed the impulse response, and in our example it is four samples in length. For descriptive convenience (only!) let’s call the reversed impulse response the “convolution-impulse”, hereafter “CI”. Picture the CI positioned over the input signal such that the rightmost sample of the CI is over sample 0 of the input signal. This means that we have the left part of the CI sitting over empty space; just assume sample amplitudes prior to the start of the input signal have a value of zero. Now, multiply the amplitude of each position in the input signal sharing a slot with the CI by the amplitude of the CI sample just above it. Sum the results of these multiplications and this becomes sample 0 of the output signal. Shift the CI one slot to the right. Rinse and repeat. We do this until the CI is positioned past the end of the input signal, at which point we are done. So, at the end, we have CI slots sitting above non-existent input signal slots. Again, just assume the amplitude values of these are zero.
In all cases, the output of a convolution process has a length equal to the sum of the lengths of the CI and the input signal minus one (the input signal will typically be much longer than the impulse response, but the process works even if the CI is the longer of the two). If you have trouble visualizing this process, draw a short input signal on a piece of graph paper and work through the successive steps, using the trivial delay impulse response. Hopefully, you will shortly see what’s going on in the process.
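The sliding-and-summing procedure above can be sketched directly in code. This is a minimal, illustrative Python version (the function and variable names are mine, not from any particular library), applied to the delay impulse response from the earlier example:

```python
def convolve(signal, ir):
    """Direct time-domain convolution, per the narrative description."""
    out_len = len(signal) + len(ir) - 1  # CI length + input length - 1
    out = [0.0] * out_len
    for n in range(out_len):
        acc = 0.0
        for k in range(len(ir)):
            i = n - k  # input position under this slot of the reversed IR
            if 0 <= i < len(signal):  # positions off either end count as zero
                acc += ir[k] * signal[i]
        out[n] = acc
    return out

# The three-sample delay: the impulse response is the unit impulse
# shifted to sample 3.
delay_ir = [0.0, 0.0, 0.0, 1.0]
print(convolve([1.0, 2.0, 3.0], delay_ir))
# → [0.0, 0.0, 0.0, 1.0, 2.0, 3.0] – the input, delayed by three samples
```

Note that the output length is indeed 3 + 4 − 1 = 6, matching the rule stated above.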
Next, let’s look at another simple case. The impulse response to the right takes the average of the current and previous three samples in the input signal to compute the output amplitude value for each sample position. This is actually a simple low-pass filter. It works that way because input waveforms (think sine waves) at low frequencies compared to the sampling frequency change slowly from sample to sample, so averaging a short sequence of them will not change the output drastically. However, as the frequency gets close to half the sampling frequency, the averages of any four adjacent samples will tend toward zero. So, high-frequency signals get eliminated – just what is expected from a low-pass filter.
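This low-pass behavior is easy to verify numerically. The sketch below (again just an illustration, with a naive direct convolution) runs a slow sine and a Nyquist-rate alternating signal through the four-tap averaging impulse response:

```python
import math

def convolve(signal, ir):
    """Naive direct convolution, for illustration only."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n in range(len(out)):
        for k in range(len(ir)):
            if 0 <= n - k < len(signal):
                out[n] += ir[k] * signal[n - k]
    return out

avg_ir = [0.25, 0.25, 0.25, 0.25]  # average of four adjacent samples

N = 64
low = [math.sin(2 * math.pi * 2 * n / N) for n in range(N)]  # 2 cycles in 64 samples
high = [float((-1) ** n) for n in range(N)]  # +1, -1, +1, ... : half the sample rate

# Peak amplitude of each filtered signal, ignoring the ramp-up/down edges.
low_peak = max(abs(v) for v in convolve(low, avg_ir)[4:-4])
high_peak = max(abs(v) for v in convolve(high, avg_ir)[4:-4])
print(low_peak, high_peak)  # the low tone survives; the high one is wiped out
```

The slow sine emerges nearly untouched, while any four adjacent samples of the alternating signal sum to exactly zero – the low-pass filter in action.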
Take one final example. Look at the image below on the left. If you convolve that signal with itself, the output will look like the image to its right. If you can see why, then you’re well on your way to having a fundamental grasp of the basics of convolution.
Now, let’s get real. In the real world of reverberation, our input signal (that to which we are adding reverb) is normally going to be many hundreds of thousands of samples in length. Furthermore, for a large space, like a cathedral, our impulse response is going to be five or more seconds long and several hundred thousand samples in length. But wait, then there’s stereo, so double that amount of multiplication and addition. But wait, then there’s true stereo (explained in a bit), so double that again. It’s clear that the number of arithmetic calculations involved for a reverb impulse of any length will overwhelm even our most powerful general purpose personal computers.
In the real world, convolution on your computer doesn’t work in the straightforward way I just described to explain the basics of convolution. Instead, some breathtakingly complex mathematical processing allows the equivalent to be accomplished in a vastly more efficient manner. Using Fast Fourier Transforms (FFTs), a convolution software process will transform chunks of the input signal, which exists in what is known as the time domain, into equivalent representations in what’s known as the frequency domain. Here, the equivalent to convolution is straight multiplication – but drastically less of it is needed than doing convolution in the time domain. Once an inverse FFT is done (frequency to time domain), the end result is exactly the same as doing it the long way in the time domain.
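The equivalence of the two routes can be demonstrated in a few lines of NumPy. This is a bare-bones sketch of the principle, not Reverberate 2’s actual engine (which works on streaming chunks of audio in real time):

```python
import numpy as np

signal = np.array([1.0, 0.5, -0.25, 0.75])
ir = np.array([0.0, 0.0, 0.0, 1.0])  # the three-sample delay again

# The long way: direct time-domain convolution.
direct = np.convolve(signal, ir)

# The fast way: zero-pad both to the output length, multiply the
# spectra in the frequency domain, and inverse-transform back.
n = len(signal) + len(ir) - 1
spectrum = np.fft.rfft(signal, n) * np.fft.rfft(ir, n)
via_fft = np.fft.irfft(spectrum, n)

print(np.allclose(direct, via_fft))  # → True: identical up to rounding error
```

For realistic lengths, the FFT route turns hundreds of billions of multiply-adds into something a desktop CPU can do faster than real time, which is why every practical convolution reverb works this way.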
Make no mistake – the computer code to accomplish these marvels is very complex. Software engineers who take on convolution processing are not only exceedingly clever individuals, they are also quite brave (or foolhardy– but maybe that’s the same thing).
One last thing – from where do reverb impulse responses come in the real world? The classic description has somebody digitally recording the results of firing a starter pistol or popping a balloon in the space for which the impulse is desired. That pop of sound is like our digital unit impulse: it contains a healthy dose of all frequencies. But more sophisticated solutions are available that avoid capturing invalid results due to various kinds of audio interference. If you have ever installed a hi-fi or home-theater system with room correction, you will have experienced the “whoop, whoop, whoop” sounds that are sine wave sweeps playing out of your speakers while the audio characteristics of the room are captured. The capture is repeated, spread out over time, and decoded by special software devoted to that purpose. But in the end, the results should be pretty close to that balloon pop or starter pistol shot.
A final point: an impulse file can actually be any audio. You can record, for example, crumpling up paper and use that as a special-effects convolution impulse. The result would not be anything close to reverb, of course, but convolution reverbs are really general-purpose convolution processors that happen mostly to be used as reverbs.
The Actual Review
There’s much to talk about concerning this advanced audio processor, but let me start by assuring potential users that to achieve what most of you will want – simply a credible, great-sounding reverb – the complexity will not be your enemy. Simply find a preset you like, tweak a handful of controls (if even that) and enjoy a glorious sound. But for those who like such activities, there is more than enough tweakability to keep you occupied for hour upon hour. Let me also observe that while most applications of this plug-in will be to introduce realistic ambience, there is more than enough capability to produce over-the-top special-effects-type results to satisfy all manner of unusual requirements, as might exist for soundtracks or other off-beat applications.
But let us focus on the more conventional goal of achieving realism. The UI of Reverberate 2, seen immediately below, is a tabbed affair, where most of the action takes place on the tabs labelled IR1 Edit and IR2 Edit and the tab labelled Mixer. The file/preset browser can be permanently displayed (as in the image below) or collapsed (as in all the other images). I have chosen here to use the default skin, but several others, including some with much lighter color schemes, are available.
Reverberate 2 offers two IR processors, the outputs of which can be mixed, optionally with the mix levels modulated. True stereo is supported. Normal stereo involves two channels of IR information, one applied to the left audio channel and the other to the right. True stereo uses four channels of IR information: IR-on-left-input-to-left-output, IR-on-left-input-to-right-output, IR-on-right-input-to-right-output and IR-on-right-input-to-left-output.
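The four-channel routing just described amounts to a small mixing matrix. Here is a hedged NumPy sketch of the idea – the function and IR names are my own illustration, not Reverberate 2’s internals:

```python
import numpy as np

def true_stereo(left_in, right_in, ir_ll, ir_lr, ir_rl, ir_rr):
    """True-stereo convolution: each input channel is convolved into
    each output channel via its own impulse response."""
    left_out = np.convolve(left_in, ir_ll) + np.convolve(right_in, ir_rl)
    right_out = np.convolve(left_in, ir_lr) + np.convolve(right_in, ir_rr)
    return left_out, right_out

# With identity IRs on the direct paths and silence on the cross paths,
# the "reverb" is a pure passthrough – a sanity check of the routing.
left, right = true_stereo(
    np.array([1.0, 0.5]), np.array([0.25, -0.5]),
    ir_ll=np.array([1.0]), ir_lr=np.array([0.0]),
    ir_rl=np.array([0.0]), ir_rr=np.array([1.0]),
)
```

The cross-channel IRs (`ir_lr`, `ir_rl`) are what ordinary two-channel stereo lacks, and they are what lets a true-stereo capture preserve how sound from one side of a room bleeds into the other.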
Where true stereo IRs are not available, several techniques are available for simulating true stereo. These are well-covered in the quite-good documentation, so we’ll say no more on that here.
The truly exciting feature in Reverberate 2, the thing that distinguishes this reverb from all others, is the Fusion option. Recall that the chief limitation of convolution reverb is that, although we can get an incredibly realistic audio snapshot of a space, it is frozen in time. In real spaces, moving air, audiences and other factors can introduce constant variation in ambience characteristics. Algorithmic reverbs can introduce modulation to various factors to mimic real-life variations. But modulating the characteristics of an IR is far too processor-intensive to work in real time. Enter the Fusion solution.
Liquidsonics solves the problem by providing an IR format that contains multiple IR snapshots in a single IR file and the wherewithal in Reverberate 2 to utilize that information, internally modulating between the multiple internal IR snapshots. Liquidsonics does not reveal much about what’s in its secret sauce here. We are not told, for example, if multiple means just two or means even more than that. We are not given any details of what is being modulated. Is it just mix levels or maybe frequency-spectrum-specific modulations? But it simply sounds glorious and that’s all that really matters.
At the moment, the Fusion IR format is found only in IRs supplied by Liquidsonics, but the format will soon be made public and third party Fusion IRs may one day be available. But those currently supplied with Reverberate 2 should be more than enough to satisfy most users’ requirements.
There are currently two collections of Fusion IRs available from Liquidsonics – they come as free downloads and not as part of the basic install. One is a more conventional collection of rooms, halls, etc. The other is a collection made by running signals through a Bricasti M7 hardware reverb, and this one is the real show-stopper. This IR collection was apparently done with the blessing of Bricasti – and why not, since it brilliantly shows off the finesse and elegance of that fabulous outboard processor. But the M7 costs well over $3,500 and will always remain far outside the budgets of most home studio producers. Thanks to Liquidsonics, however, we have the next best thing at a very affordable price.
One point is noteworthy: none of the Fusion presets take advantage of more than one IR Edit slot. They invariably just use the first. The dual IR capability is probably of greatest benefit when using non-Fusion IR files. With the Fusion option, the extra capability is just not needed because we already have an internally modulated process happening in just the first IR processor. The size of the Fusion IR files is a clue as to how much is packed into one of them.
Even if Reverberate 2 offered no other bells and whistles, it would be well worth the price for the Fusion capability and the two Fusion IR collections alone. You simply have to hear the results to understand just what a treasure you have available.
But let’s move on to other things. Much more exists on the IR Edit tabs. We have the ability to impose an ADSR-like envelope on the IR shape, and we may alter the length of the IR. Almost anything here can be automated from your DAW, but anything that requires a recomputation of the IR data should be avoided in real time – there’s just no way your processor could keep up.
Two sub-tabs of the IR Edit tabs allow for the generation of ER segments and reverb tail segments that conform to certain characteristics. These synthetic ER and tail segments can be used to augment an IR (by placing them in the extra IR Edit slot) or they can be used on their own. A set of modelling parameters is provided to get things started. For example, the ER generation begins with specification of a space type: Grand Hall, Roman Dome, Chamber, etc. Additional parameters are provided to control density, distances, and so forth. The ER and tail tabs are respectively shown below.
Each IR Edit tab has an associated EQ tab, as seen below. With these, we can not only set static EQ characteristics but can actually introduce sweeps, which produces a very synthy-sounding result – a spectacular special effect even if used only rarely. A Post EQ tab is also present that can be applied to the mixed output of the two IR processors.
Let’s now jump ahead and look at the Mixer tab (shown below). This brings me to one of my few criticisms of Reverberate 2. Notice the two knobs in the upper right labelled Gain and Dry/Wet? The Mixer tab is the only tab offering a Dry/Wet control, which is bad enough – this control should be globally available. Even worse, Dry/Wet occupies the same position that Gain does on every other tab. First-time users can easily do something nasty to their ears and/or speakers when they mean to increase the Wet amount and crank up the gain by mistake – it happened here, so do be careful.
Notice the signal flow diagram in the lower portion. You can see that two IRs can be run in parallel or serially, which opens up some opportunities for super-ambient, deep-space effects. We also have a chorus and a delay, both with plenty of options. I’m not going to go into detail on these here. They are well presented in the manual and they are great at adding further movement to the output of the IR process. Let the UI image of the Delay tab suffice for the moment.
Is Reverberate 2 for You?
By now I suspect you’ve gotten the idea that I think Reverberate 2 is pretty special. I already had a handful of very fine convolution reverbs installed on my DAW computer, and some of those were accompanied by superbly-produced IR collections that covered the breadth of types of natural spaces. I think there’s a very good chance I will never feel inclined to use any of them again, that’s how spectacular Reverberate 2 sounds to me. In fact, if there were to be no further advances in reverb technology in my lifetime, I would feel no disappointment.
Oh, I will still certainly use algorithmic reverbs here and there, especially for specific FX situations. My 2CAudio Aether and Valhalla Plate will certainly not gather dust. But for convolution, it’s probably going to be Reverberate 2 all the way from here on out. Most of us, I suspect, would have a hard time justifying the expense of adding yet another reverb to the collection we already possess. I would argue that Reverberate 2 is so special and unique that this natural instinct should be suppressed in this case, at least to the point of auditioning the demo version. Just be sure to grab the Fusion IRs, which require separate downloads, or you’ll miss the most important point of the exercise.
Enthusiastic thumbs up on this one. For more information or to purchase, go here: