What if you could sample the character of FX units or signal chains the same way that you can sample instruments and recordings? Acustica Audio’s Nebula is a product designed to do exactly that.
by Per Lichtman, Nov. 2013
The Nebula Concept
What if you could sample the character of FX units or signal chains the same way that you can sample instruments and recordings? Acustica Audio’s Nebula is a product designed to do exactly that. Back when I first tried the product years ago, the potential of the platform was only hinted at by the bundled library. But there are now around a dozen third-party Nebula library developers listed on Acustica Audio’s site, with over 200 libraries available for sale at the time of writing, running the gamut in price from free to a few Euros to premium bundles over the 100 Euro mark. So now you can really hear what Acustica Audio’s Nebula can do.
The Short Version
This is going to be a long review because Nebula is very different from other products, so I’ll skip ahead to the good part for a moment: if you are using Nebula the way it is designed, then it can get you closer to the character of the sampled hardware than any other product I have tested so far (and there are a ton of sampled hardware libraries you can buy). That’s why it gets used on so many tracks I work on now. What it’s not designed to do is deal well with really loud input signals (you will need to turn down the output on a lot of soft synths) or work at the tracking stage (it adds a little latency). To get the most out of it, you will want to use it during the mixing or mastering stages – or alternately via batch processing at any point after tracking, or during sound design. If you find yourself buying (and wanting to use) all the different 3rd-party libraries, you’ll also need a lot of memory and hard drive space, and optimally a fast processor. Several of the 3rd-party libraries (including several from Cupwise and Gemini Audio) have been future-proofed by including high-CPU-usage modes that currently require offline rendering, but that could be real-time capable on future systems. Now let’s get a bit more in-depth.
Nebula’s Strengths and Workflow
Out of all the things Nebula can do, perhaps the simplest to understand is the ability to “color” a signal. When dealing with outboard gear, every addition to the signal chain colors the sound somewhat, often in very subtle ways. When working entirely “in the box” on a computer with algorithmic manipulations, this is a nuance that sometimes gets lost. Nebula is not a traditional algorithmic plug-in. Algorithmic plug-ins have to create models to approximate signal behavior, and the complexity, quality and behavior of these models vary greatly, not only from developer to developer but also over time for the same developer, relying on the developer to differentiate between important and unimportant variables. By contrast, Nebula provides tools for 3rd-party library developers (and all of its customers) to sample a signal path, recording the exact characteristics of the way a signal gets modified in a given chain (within the limitations of the platform – more on that later), and to then apply it to anything else. It really is an “FX sampler.”
In contrast to normal convolution techniques, Nebula sampling can keep track of the differences in the way a signal is processed at different input levels, and it even supports modulated effects. It can smoothly interpolate between different settings on the hardware being sampled, or between different positions if a physical space is being sampled via microphones instead. The technology is great at handling the sound and behavior of preamps, microphones, consoles and other gain-stage and color devices, but over the years it has also gotten much better at handling dynamics, to the point where I sometimes choose it over expensive algorithmic compressors in my studio for certain tasks.
It takes a bit of adjustment to get used to it (EQs are normally sampled and operated a band at a time; metering varies greatly from program to program; etc.) but it’s incredibly effective at both strong and subtle coloration, with an incredible array of colors.
Getting Started With Nebula
The key to figuring out whether Nebula is right for you is to set it up correctly so you get an accurate impression from the get-go. In writing this review I spent time talking with Acustica Audio, the 3rd-party library developers and experienced Nebula users, and scoured the forums to see what sorts of experiences people had (both positive and negative). I compared their experiences and suggestions with my own controlled A-B testing and came to a few conclusions about how to quickly get a sense for whether Nebula is right for you.
First of all, here’s a very common mistake that people make when using Nebula: they assume that it can be accurately evaluated by using it just like their other plug-ins. Nebula is in some ways more like outboard analog gear in that it’s very level dependent, but unlike that gear, it is simply not designed to handle overloading well. Let me repeat and re-emphasize that: you cannot judge the sound of Nebula by sending it a signal that overloads the plug-in. Nebula is designed to be used with headroom left over (just like an awful lot of traditional hardware mixing), not to smash the signal up to and past 0dBFS. If you do that, you go outside the range of the model and you’ll get horrible artifacts that don’t have anything to do with the sound that’s being modeled. There are a ton of plug-ins that are designed to handle that sort of workflow (we’ve got upcoming reviews of URS Saturation, Studio Devil VTP and SoundToys Decapitator as a few examples) but Nebula is designed to excel at signals with a normal input level (and output level) range.
But if you aren’t supposed to max out the input and output levels in Nebula, how do you get a thick sound out of it? First, you need to pick a preset designed for the sound you like. Most of the 3rd-party libraries I was provided with came with some information about what the optimal input levels were, but generally speaking I found getting my input level peaks to land between -18dBFS and -12dBFS provided consistently good results. You could get even better results for some libraries by optimizing the levels further, and some libraries were designed to get an especially thick sound with peaks hitting in the -12dBFS to -6dBFS range. Long story short: when in doubt, make sure you’ve got at least 12dB of headroom and you are unlikely to run into problems, but you can get more technical if you want (and VU-style meters can really help).
Once you’ve got the signal being processed in the optimal range within Nebula, you can boost the output level with a fader (or fader plug-in) afterward if you need it to be hotter in the mix. So, here are two abbreviated cookbook approaches:
Workflow for Algorithmic Plug-in Designed for Hot Levels: (Optionally Attenuate Level to allow boosting more with plug-in)>Process with Algorithmic Plug-In
Workflow for Nebula: Attenuate Level > Check Meter> Process with Nebula > Bring Volume Level Up Outside Nebula
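To make the level math concrete, here’s a minimal sketch of that workflow arithmetic in Python (the function names are mine for illustration, not part of Nebula or any DAW): it computes the cut needed to land peaks at a target like -18dBFS, and the equal-and-opposite makeup gain to apply after processing.

```python
import math

def db_to_linear(db):
    """Convert a dB value to a linear gain factor."""
    return 10.0 ** (db / 20.0)

def linear_to_db(gain):
    """Convert a linear gain factor to dB."""
    return 20.0 * math.log10(gain)

def trim_for_nebula(peak_dbfs, target_dbfs=-18.0):
    """Return (attenuation_db, makeup_db): how much to trim before
    Nebula so peaks land at target_dbfs (negative = cut), and the
    equal-and-opposite boost to apply after processing."""
    attenuation_db = target_dbfs - peak_dbfs
    return attenuation_db, -attenuation_db

# A mix element peaking at -6 dBFS needs a 12 dB cut to reach -18 dBFS,
# and a 12 dB boost after Nebula to sit back where it was:
cut, makeup = trim_for_nebula(-6.0)
```

The same arithmetic explains the “at least 12dB of headroom” rule of thumb: material peaking near 0dBFS needs roughly a 12–18dB trim before it lands in the suggested range.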
With that out of the way, let’s start with one of the best Nebula presets. The library included with the commercial Nebula versions should be thought of as the “General MIDI” of effects hardware sampling. In other words, there is a huge variety of sounds (and some of them are quite usable on their own) but almost all of them pale in comparison to the dedicated libraries you can get for the platform now. Much of the sampling for the bundled library was also done at a much earlier stage in Nebula’s development, so its programs don’t take advantage of the latest features. I mainly reach for the sounds in the bundled library when I can’t find them in any of the additional ones.
For example, in the Tape section of the bundled library, you’ll find presets labeled Tape 5042. These are sampled from a Rupert Neve 5042 unit, which you can also find in AlexB’s TSX library (along with other units). If you use A-B processing with the bundled library vs. with AlexB’s, the difference is quite striking: AlexB’s is obviously superior.
If you want to hear Nebula’s strength in reverb for free, I suggest going to http://www.vnxtsound.blogspot.com/ and downloading the free 2.5 second EMT 140 Plate program from the full library. Note: this was sampled from an actual plate, not a hardware box or plug-in. Here’s how to make it run correctly once you install it.
– Open the Nebula3 Reverb plug-in in your DAW.
– Go to the MAST page.
– Click next to “MODE” where it says “0 Simple” and drag upward until it says “1 Guru”.
– Click the second-to-last parameter on the right list that says “DSPBUFFER” and drag it upward to 8192.
– Click the “Save” button inside Nebula in the upper right corner of the LCD display.
– Remove the Nebula plug-in from your session and reload the plug-in.
– Click on the word “Init” then navigate through Reverb> VNXT>EMT 140 Plate and click on the 2.5 second program.
There are more great plate sounds available but this really is a good indication of just how high quality reverb can get on Nebula.
So why do I love Nebula so much? Because of the combination of variety and quality of the sonic possibilities it offers. It’s about being able to quickly compare the sound of gear that was sampled from across the world (some of it one of a kind) and having the models sound so excellent. It’s about having both really subtle and really strong coloration available equally easily and having so many developers releasing new libraries for the platform all the time – many of them quite inexpensive. In essence, I can buy a Nebula library for almost any occasion.
Nebula can be beautifully subtle but it can also be very dramatic when you want it to be. Let’s take a look at a couple of examples of getting strong coloration with Nebula.
Need to make a pristine digital virtual instrument synth sound more vintage? Throw a strongly colored synth preamp program on it, like one sampled from an Oberheim or Moog by SignalToNoize. Or if you wanted a little less grain and vibe but a bit more exaggerated bass, you could use the pre from the Korg MS20 that AlexB sampled in Vintage Synth Filters. After that you have several options for adding additional coloration. Do you want to capture the sound of analog tape or a cassette using an appropriately named program from SignalToNoize, CDSoundMaster or CupWise? Or would you rather capture the sound of one of the effects units in AlexB’s TSX (which are designed to warm up the signal in various other ways)? Or would you prefer to explore the sound of the preamps or transmission approaches of a radio using one of CupWise’s innovative libraries? There are tons of compelling possibilities at your fingertips, and by setting up a couple of tracks with a Nebula instance on each, you can quickly compare and contrast the results in a way that normally takes much longer to do with actual hardware.
Here’s a quick primer on the amount of coloration with the aforementioned tape libraries or the tape-style FX in AlexB’s TSX (which you should probably skip if you’ve used actual tape). When programs mention IPS (inches-per-second), it has to do with the speed at which the tape goes by during recording and playback. To over-simplify greatly, the higher the speed, the more “accurately” the high frequency content gets recorded. Lower speeds will sound more obviously colored than high speeds, so my suggestion is to start by auditioning the lowest speed offered for the unit you have in mind in order to really be able to hear the coloration, before trying to find the optimal speed for your content. All the units that supported 7.5 IPS or slower were able to color the sound in an obvious way if desired; 15 or 30 IPS tends to be good when you want less obvious coloration.
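For the curious, the “higher speed records highs more accurately” rule follows from the recorded wavelength on tape: wavelength equals tape speed divided by signal frequency, and once that wavelength shrinks toward the size of the playback head gap, high-frequency response falls off. Here’s a deliberately simplified sketch (the function name is mine, and real tape response involves far more than this one effect):

```python
def recorded_wavelength_mils(speed_ips, freq_hz):
    """Wavelength recorded on tape, in thousandths of an inch (mils):
    wavelength = tape speed / signal frequency."""
    return speed_ips * 1000.0 / freq_hz

# At 15 IPS a 15 kHz tone occupies 1 mil of tape; at 7.5 IPS only
# 0.5 mil, so the same head gap eats into the highs sooner at slower
# speeds, which is heard as stronger coloration.
wavelengths = {ips: recorded_wavelength_mils(ips, 15000)
               for ips in (3.75, 7.5, 15, 30)}
```

Halving the speed halves the recorded wavelength at every frequency, which is why each step down (30 to 15 to 7.5 IPS) audibly increases the high-frequency rolloff.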
For more subtle coloration, you can load up a high quality modern console or preamp. Such gear was designed to maintain signal integrity using modern techniques so the coloration is often much more subtle than with vintage gear – but no less important.
As one top coach said: “Shaving a few tenths of a second off the 100 meter dash for a beginner won’t help much. But if I can do that for an Olympic athlete, it can make the difference between a good effort and a medal.” When it comes to using subtle coloration, it won’t make a bad mix sound good. But it can be used to help take a sound or a mix and push it right over the edge from “really good” to “nailed it”. It can also make the process of mixing easier.
Included vs. Additional Libraries
The majority of users are more likely to buy Nebula for the libraries currently available than to try to roll their own. That is why the huge collection of libraries currently available for sale (as well as several additional free ones) is so critically important. Across the board, I found developers had taken advantage of their greater experience with the sampling process (and the evolution of the Nebula engine) to create progressively more powerful libraries over time. In other words, it’s often best to start by looking at a developer’s most recent libraries to get a sense for what they are capable of.
Some of the easiest libraries to use are the preamp, console and microphone libraries. With these libraries you can essentially load the program and get “the sound” without having to tweak settings. The large catalogs assembled by developers like CDSoundMaster, AlexB, Henry Olonga and SignalToNoize each include many types of libraries, while many other developers currently cater to different niches. For example, TranscendingMusic’s libraries (http://www.tmusicaudio.com/nebula.html) cover various types of specialized outboard gear, from enhancers and stereo field modules to a saturation library made using the Thermionic Culture Vulture (one of the same units modeled in algorithmic fashion in the competing SoundToys Decapitator, which we will cover in an upcoming review).
The bundled microphone library is outdated and unrepresentative: the 5 kernel versions here cannot compete with the 5 kernel versions in top libraries, let alone the “full” ones that average 10 kernels. I would suggest not even downloading it.
The three microphone libraries currently available from http://www.gemini-audio.net/ have great sound quality, are easy to use and don’t take up that much space (the zipped download is less than 60MB per model). While I’ve talked a lot about subtlety, there’s nothing subtle about the difference between these models and the ones in the bundled library.
Henry Olonga’s http://www.nebulapresets.com/ also features a great collection of high quality microphones as well.
Between Henry Olonga and Gemini Audio, the Royer microphones are especially well represented and have the highly desirable ribbon color.
Perhaps one of the most appealing applications for Nebula is the collection of console libraries. At the time of writing, four third-party developers have created console libraries: Alessandro Boschi (AlexB), Analog in the Box (AITB), CDSoundMaster (CDSM) and SignalToNoize (STN). Surprisingly, there appears to be no overlap in the models between them, meaning users currently can buy over 15 consoles (18 if you count the EQ/console combo options) to have at their fingertips. There are only a handful of companies outside Nebula sampling that have tackled the “virtual console”, and even they would have to acknowledge that the variety available on the Nebula platform is unmatched by any other hardware or software product to date.
Here’s a bit of info on the consoles, taken from the unofficial (and unaffiliated) list at http://www.learndigitalaudio.com/blog/nebula-vst-universal-program-explorer – the accuracy has not been verified and the names are generally not advertised by the developers. AITB’s Tube Console Bundle emulates an unspecified custom console and STN’s offering comes from the sound of the 1960s American Langevin AM4A (or L-401). CDSoundMaster offers 5 consoles (including ones from Trident, Sphere, BBC, MCI and an unspecified British manufacturer) as well as two EQs (an Amek 9098i and another which a senior forum member indicates stems from Orphan Audio hardware).
The largest number of consoles comes from AlexB: two Neve consoles, another Rupert Neve Designs one, two SSL desks, an API, TL Audio and even the Neumann console model used to master Pink Floyd’s Dark Side of the Moon.
As an audio engineer, it really is difficult for me to say enough good things about these specific libraries. High profile studios generally don’t have more than a few of these (some rely on one) and here are over a dozen that you can run on your computer without additional hardware, in one place, without having to travel, and hear in seconds. I was a little skeptical as to how big a difference these would make; I thought it might be subtle stuff that only audiophiles could hear. Well, maybe you won’t hear the difference right away if you are listening on laptop speakers in a noisy room, but play these on any decent set of speakers (or even just a $200 pair of headphones plugged into the lowly headphone output on a few-year-old MacBook Pro) and the difference made by adding just the mixbus/direct out/master output stage to a mix before mastering is very noticeable. The difference only gets more dramatic as you start to apply it as intended.
Each console includes programs to use per channel (generally suggested as the first FX but useful anywhere in the chain) on each instrument and another to be used on the master bus. Some of them, such as AlexB’s Vintage Blue Console (VBC) and Modern White Console (MWC), go even further. VBC and MWC provide 6 or more different line input channel programs, two microphone input programs, multiple variations of the final output stage coloring (often through tube switching), a separate sampling of the FX send-return bus the console used and several additional programs intended for use on a bus/group for a given collection of instruments (such as the synths and pads or the drums, etc.). The cumulative effect of applying these stages to the different parts of the mix can be really dramatic, and I was honestly surprised by the sheer depth of the sampling in many cases. Here’s a suggestion for how to get a sense for it: apply just those effects as intended to every part of the mix, adding no other FX, and then render the mix both with and without the FX. Normalize both and listen. In my experience the biggest difference came in the sense of space and the tonal character, more so than in terms of volume or “punch”. Try doing the same thing with two consoles of vastly different characters, like CDSM’s Globe console and AlexB’s German Mastering Console. I knew there would be a difference, but I didn’t expect it to be as great as this.
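If you want to run that with-vs-without comparison outside your DAW, here’s a rough sketch of the normalize-and-compare step using NumPy. The signals here are synthetic stand-ins for your two rendered files (nothing about this is specific to Nebula, and the function names are mine):

```python
import numpy as np

def peak_normalize(x, peak_dbfs=-1.0):
    """Scale a signal so its absolute peak sits at peak_dbfs."""
    target = 10.0 ** (peak_dbfs / 20.0)
    return x * (target / np.max(np.abs(x)))

def difference_rms_db(a, b):
    """RMS level of the difference between two equal-length renders,
    in dB relative to full scale; more negative = more alike."""
    diff = a - b
    rms = np.sqrt(np.mean(diff ** 2))
    return 20.0 * np.log10(max(rms, 1e-12))  # floor avoids log10(0)

# Stand-ins for the "without FX" and "with FX" renders (one second
# at 44.1 kHz): the processed version has slightly different level
# and a touch of added harmonic content.
t = np.linspace(0, 1, 44100, endpoint=False)
dry = 0.5 * np.sin(2 * np.pi * 440 * t)
wet = 0.4 * np.sin(2 * np.pi * 440 * t) + 0.01 * np.sin(2 * np.pi * 880 * t)

# Normalize both to the same peak so the comparison is about character,
# not loudness, then measure how different they actually are.
level = difference_rms_db(peak_normalize(dry), peak_normalize(wet))
```

Matching peaks before comparing is the whole point of the “normalize both and listen” step: it keeps a simple level change from masquerading as tonal character.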
I find that when using the console programs on raw recordings, I don’t need to do as much EQ work and that the tonal balance often seemed to be easier for compressors to handle. I did many A-B tests experimenting with different FX orders in my signal chain just to be sure, and sure enough, the console programs most often sounded better before the compressors (or other dynamics processing) were engaged.
Each of the libraries sounds different and has a character of its own, with my colleagues demonstrating repeatable preferences in blind testing: from the “radio-friendly” sound of AlexB’s Classic Logic Console and Modern Logic Console, to the far more relaxed highs and sense of space of CDSM’s Trident, to the round (and noticeably less colored) sound of AITB’s Tube Console Bundle.
I’d like to take a short moment to dwell on STN’s offering. The L401 that Eric Beam at STN sampled really sounds quite different from the other consoles. I found it to be very useful on string quartets, for instance, that had too much air or rosin noise or had somewhat exaggerated dynamics. On drums, it makes things less bright and splashy. It can be great for taking some of the “digital edge” off drum machines, for instance. It’s useful for many other things too. Essentially, it feels like the soundstage is both a little closer and a little smaller, with a bit less dynamics and a smoother sound. It’s my “let’s simplify, rein things in and focus a bit” console choice. Warm, smooth and focused.
Delays (much like reverbs) require a larger DSPBUFFER setting than most other programs. In my testing I used 8192 samples, but I suggest experimenting with different values to find the one best suited to your system. Note that running your session at a higher sample rate means a proportional increase in CPU usage.
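The latency cost of a large DSPBUFFER is easy to sanity-check: it is roughly the buffer size divided by the sample rate. A back-of-the-envelope sketch (my own simplification, not an official Acustica figure):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Approximate latency contributed by a processing buffer,
    in milliseconds: samples / (samples per second) * 1000."""
    return buffer_samples / sample_rate_hz * 1000.0

# An 8192-sample DSPBUFFER adds roughly 186 ms at 44.1 kHz, but only
# half that at 88.2 kHz -- while the CPU now has twice as many samples
# per second to process, hence the proportional CPU increase.
at_44k = buffer_latency_ms(8192, 44100)
at_88k = buffer_latency_ms(8192, 88200)
```

This is also why a big DSPBUFFER is fine for mixing but rules these programs out for tracking.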
In addition to Cupwise’s feature-rich “Pioneer Analog Echo / Reverb” library (discussed in the reverb section), there are currently three dedicated delay collections for Nebula, all offered by STN at prices ranging from $18 to “free”. Once installed, they are found in the Delay (DLY) menu inside Nebula. Each is full of character and brimming with vintage flavor – as well as vintage eccentricities. They are light on controls (dry and one or two FX level controls, mainly) but you’d be hard-pressed to find recreations of a Maestro Echoplex or ADA TFX4 in DSP anywhere else. The MoogerFooger Delay has been emulated elsewhere before, but rarely with such a “warts and all” approach to the detail of the sound. Throwing one of these into a mix is like opening up a time machine to a different era. Levels can be tricky (after all, the original gear included feedback) but it’s worth spending the time to get it just right. My advice: when in doubt, cut the input level and boost the output level (even if you have to do so outside the plug-in to avoid clipping). The Plex (Tape Delay) is taken from a Maestro Echoplex with programs sampled at various delay times, with feedback. The programs sounded great to me, full of vibe and character that’s quite different from normal tape-delay plug-ins. If you’re mainly looking for tape delay with lots of vibe as opposed to exact timings beyond those covered in the presets, this is the one for you.
The downside to Plex (as compared to a more modern plug-in that might have less character) is that you must give up niceties like tempo-sync, feedback control, or even basic timing control. Instead you have 7 presets. Honestly, I have yet to hear a more convincing vintage tape-delay anywhere else, and if it had those niceties, it could probably supplant all my existing tape delay plug-ins. But luckily, the presets cover a range of timings: 439 ms, 380 ms, 312 ms, 245 ms, 180 ms, 123 ms and 3 ms.
The Plex programs could act up at times on my review system with unintended feedback building up randomly, but hopefully it’s an isolated (or easy to fix) problem.
The “Analog Doubler” library comes from an ADA TFX4 machine, is offered in three settings and does pretty much exactly what the name would lead you to expect. I didn’t really think I needed a doubler but found this one quite effective at thickening up voice and drums. It’s a lot better than I expected from a free library.
I didn’t receive a copy of the MoFo delay in time for this review, so I will try to cover it in a future issue. From the demos I heard, it seemed to continue STN’s attention to capturing the vibe of the original gear.
At first I was slightly disappointed with the filter selection for Nebula. After all, the offerings in the bundled library didn’t compare favorably to plug-in alternatives, and none of the commercial libraries I tried were capable of the screaming resonance of my favorite plug-ins like Cytomic’s The Drop. But then I noticed what I’d been missing: filter preamps. STN and AlexB both offer a collection of filter preamps that work well with any other filter you throw at them, and by their very nature, they often have a stronger color than microphone preamps. STN offers several different ones (Moog, Oberheim, Arp) as well as preamps from various other synth-related gear (drum machines, etc.) that are all quite inexpensive. AlexB’s filters are included in the Vintage Synth Filters collection.
You can also find an exotic Urei filter at http://www.cupwise.com/cup/ that is incredibly smooth and has a very gentle slope (as well as being able to get some really crazy modulation effects, etc.).
More on the Libraries Soon
From AlexB’s variety of well-known consoles, to CDSoundMaster’s mixture of more unusual and one-of-a-kind ones (as well as the Trident sound associated with some of Michael Jackson’s top records), to the eclectic collection of gear in the SignalToNoize archive (especially interesting to anyone looking at vintage synth and FX gear). From the “pushing the envelope of what a Nebula program can do” approach in Cupwise’s releases (which often add parameter controls not seen in any other Nebula programs) to the amazingly comprehensively sampled API 2500 compressor library offered by Gemini Audio. From Gene Lennon’s faithful recreation of some of his most used TC System 6000 presets (which sound eerily close to the original in the comparisons he provided), to the outboard reverb emulations by STN, to the multi-position and multi-mic Theater of Life by RoomHunters, to the plates from VNXT, Cupwise and Tim Petherick. From OwnHammer guitar programs (which I used to great effect on sessions by the excellent Jimmy Haun) to TranscendingMusic’s range of processors and FX units (including great stuff for free). From the excellent Silk EQ by Tim Petherick and the unique Manley sound of AITB’s Mammoth EQ to the air EQ presets from Henry Olonga. Not to mention the array of microphone models offered by Henry Olonga and Gemini Audio… there’s an overwhelmingly huge selection.
We’re coming back for more in the coming months, but here’s a list of the developer websites and at least one of my favorite libraries from them.
AlexB: German Mastering Console (GMC) and Vinylizer (VNL). Keep your eyes peeled for the upcoming Modern Flagship Console (MFC).
http://www.analoginthebox.com (should be back soon)
AnalogInTheBox: Mammoth EQ
CDSoundMaster: Apex Tape, R2R, TapeBooster+
CupWise: Custom Plate, Smooth 609A BusComp (off a Neve 33609A), Urei emulation
GeminiAudio: API 2500 Compressor or Royer 122V microphone library.
Gene Lennon: TC System 6000 reverb collection (GLP)
Henry Olonga: Microphones (almost any but definitely the Royers) and the EMI preamp. Keep your eyes peeled for their free upcoming Bricasti M7 reverb library in conjunction with CDSoundmaster.
Note: Part of the proceeds from the libraries sold at NebulaPresets.com go to supporting the Mumvuri Project, an orphanage in Zimbabwe.
OwnHammer: Any of the Studio Mix Libraries.
http://roomhunters.net/ (Currently unavailable but Acustica Audio is working to host this in their shop soon)
RoomHunters: Theater of Life multi-position real life reverb collection.
SignalToNoize: Oberheim and Moog Preamps, Plex Delay, Analog Delay
TimPetherick: Silk EQ, ELC24 V2 EQ, Spring (SSP)
TranscendingMusic: VCult (from the Thermionic Culture Vulture)
VNXTSound: EMT 140, BX20
Zabukowski: Creator of Nebula Setups 2 and NebulaMan utilities for Nebula – not libraries.
[Update 2014: Acustica Audio has added another official 3rd party developer, Lars Rüetschi, whose guitar and bass oriented Nebula programs can be seen at http://www.thesessionguitarist.com/nebula-programs.html but I have not had a chance to audition them yet.]
Come Back Soon
It should be noted that during the course of this review, two of my favorite Nebula developers were in the process of making changes that mean their libraries are temporarily unavailable for sale. [Update: Acustica Audio has stated that they are in the process of trying to make 3rd party libraries, including these, available for sale through the store at http://www.acustica-audio.com/, so check there as well to see if they become available.]
First, AnalogInTheBox.com, one of the oldest 3rd-party Nebula developers, founded by Niklas Richter (formerly known as Velinas, the moderator of a very popular forum that used to create and provide cool new Nebula programs before the creation of Analog in the Box). Niklas still provides access to the site for existing customers and is in the process of finding someone else to take care of the site and brand so that people can start buying the products again. I hope this happens soon, as the challenge-and-response system for the libraries presents certain issues at present (like limited authorizations and some compatibility issues with newer Nebula versions) – something that wouldn’t matter if libraries like Mammoth EQ weren’t so good that I wanted to be guaranteed the ability to use them. In fact, Mammoth EQ is both uncommon and one of the best sounding approaches to the Manley Massive Passive currently available anywhere (with only 3 other emulations existing on any platform that I’m aware of). So stay tuned for its return.
At the time of writing RoomHunters.net had their site temporarily unavailable, but they plan to return soon – so my suggestion is to pretty much just check them every day until they come back. 😉 What RoomHunters.net offers with Theater of Life is unlike any other Nebula library and has few peers in the reverb world in general, sharing the most in common with something like VSL’s much more extensive and expensive MIR. To be clear, Theater of Life does not sound like a large concert hall: it sounds like a small theater, and the miking is more on the “full of character” than the “pristine and clear” side of the equation. It has very low CPU usage compared to other Nebula reverb libraries (and is actually relatively low for reverb in general with the right settings) and uses around 10 microphone programs (one of them mono) with panning and distance controls that take full advantage of Nebula’s interpolation capabilities to give you real-time automation of distance and placement. I definitely encountered some phase issues, but the library is extremely inexpensive, and on the right material (or for something that just needs an authentic small theater vibe) the result is compelling, realistic, low-CPU and easy to use.
The Different Nebula Versions
There are many Nebula products now available, from the core product range (Nebula 3.6 Free, Nebula 3.6, Nebula 3.6 Pro, Nebula 3.6 Pro Server Bundle) to standalone products using the so-called “Acquavox” Nebula engine (like the recently released “Stradipad” collection). So the question is: which product does what? Which one should a new user be looking at? We’ll work our way down from the top to make it simpler. As a quick note, the products are all labeled in the Nebula shop with their full version number only, but since the 3.6 release is quite different from the 3.0 release I originally encountered years ago, I will refer to versions specifically by number here. Note that this review was written using the Nebula 3.6 Pro Server Bundle.
At the top, there’s the Nebula 3.6 Pro Server Bundle (currently €189.00). This is the flagship model and will run every Nebula library released with the highest performance and highest memory efficiency currently available, with support for Windows and Intel Mac OS X in 32-bit and 64-bit. Unlike the other paid versions, Nebula 3.6 Pro Server Bundle consists of two parts: the normal plug-in and a server application. Since the bundle comes with multiple licenses, the server application can be run from a different computer than the plug-in (as long as you start it before loading the plug-in), allowing you to use additional resources on another computer. I did not have occasion to test this during the review process, but did ensure that the server application functioned properly when run locally. When the plug-in is used without the server application, it simply functions like a souped-up version of Nebula 3.6 Pro. Both Nebula 3.6 Pro Server and Nebula 3.6 Pro come with the N.A.T. 3 sampler application to sample your own hardware for use with Nebula.

The next step down is the Nebula 3.6 Pro Bundle (currently €139.00). During my testing, all libraries I encountered supported Nebula 3.6 Pro (though some did not support the less expensive Nebula 3.6 Bundle). In other words, this is the least expensive version that will give you access to the full Nebula range on both Windows and Intel Mac OS X in both 32-bit and 64-bit. Since Nebula 3.6 Pro comes with two licenses, the main difference compared to the more expensive Nebula 3.6 Pro Server Bundle is that you don’t have a server application you can run on another computer (or locally) and there is no memory sharing code. This could be thought of as the version with the most bang for the buck and is the lowest price point at which you are assured the “full” Nebula experience. As mentioned earlier, it comes with the same effects-sampling application as 3.6 Pro Server (N.A.T. 3).
The least expensive member of the core product line is the Nebula 3.6 Bundle (currently €79.00). This version supports most 3rd-party Nebula libraries, but read carefully before you buy one, because some require either Nebula 3.6 Pro or Nebula 3.6 Pro Server (though such libraries are currently in the extreme minority). This version does not have the broad OS or 64-bit support that the higher-priced versions offer (it’s Windows only, for starters) and comes with 1 license instead of 2. It also comes with the less powerful N.A.T. 2 instead of the N.A.T. 3 offered in the other versions. The higher-priced versions also use a superior Nebula engine. While it was slightly difficult to sort out exactly which aspects sounded better, performed better or were better supported in Nebula 3.6 Pro vs. Nebula 3.6, Acustica Audio’s comparison table mentions both Dynamic Range and Multi Dimension Models. That covers the paid products currently available in the core Nebula range, leaving Nebula 3.6 Free and the standalone library products built on the AcquaVox engine. For this review, I spent almost no time with AcquaVox and the standalone library products that don’t require users to own a Nebula product. However, I can say that Nebula 3.6 Free appears to be largely based on the code base of the Nebula 3.6 Pro Server, but it cannot load most libraries (not even the library bundled with the paid versions) – only ones specifically designed for Nebula Free – not unlike Native Instruments’ approach with Kontakt Player as opposed to the full Kontakt version. It comes with a much smaller library, and a few developers provide patches that can be loaded into it, but Nebula Free might best be thought of as a way to get a sense of the platform before spending money on it.
As a legacy note: while I was writing this article, Acustica Audio discontinued the Nebula 2 series (an inexpensive version that was both more difficult to use and less powerful). This was a good move for the brand in my opinion, since having too many products could get confusing for new users. On top of that, several libraries already sounded noticeably better in Nebula 3 (let alone 3.6 Pro or Pro Server), if they were supported in Nebula 2 at all. In other words, the odds of people getting an accurate first impression are higher now.
Zabukowski Software’s Nebula Setups 2 and Understanding Nebula Files
Nebula libraries primarily consist of two types of content: Programs and Vectors. Programs are smaller files that contain parameters and metadata (much like “programs”, “presets” or “patches” on traditional samplers). Vectors are larger files that contain the actual sampling data that Nebula references during processing. Both types have their own named folder inside the Nebula system folder, which installs to a default location on both OS X and Windows 7 systems (on Windows, under the system drive, typically “C:\”).
If you’re using the default Nebula settings, you’ll notice that the more Programs you get, the longer Nebula takes to load. In addition, you may start to find that it takes longer to select or find the Programs or to organize them effectively. Different programs may also require different Nebula configuration settings to make the most of them (for instance reverb Programs often require a higher DSPBUFFER setting inside Nebula than preamp Programs).
Zabukowski Software (http://zabukowski.com/software/ or available for purchase at http://www.acustica-audio.com/index.php?page=shop.browse&category_id=67&option=com_virtuemart&Itemid=53) has created Nebula Setups 2 to address all of these issues.
Nebula Setups 2 uses a graphic interface to create more optimized Nebula configurations. If you read the PDF called “Nebula Setups 2 Quick Guide” and go through the “Install” and “First start and basic setups” sections, you can start creating custom setups of your own.
Here’s how I created one optimized for Programs with a long tail (like some of the reverbs and delays).
1) I started Nebula Setups 2, went to the Tweaks menu and selected “Add Parameter”.
2) I scrolled down to the “DSPBUFFER – Internal Buffer Size” parameter.
3) In the value field I typed in “4096” to increase the buffer size to 4096 samples and clicked OK.
4) In the upper left window I selected the “REV (Reverb)” category.
5) I right-clicked on the category and selected “Add program(s) from selected category”.
6) I repeated steps 4 and 5 for any other category with long tails (in my case, the “DLY (Delay)” and “SSP” categories).
[These categories came from programs from SignalToNoize.com and TimPetherick.co.uk but your own categories may vary.]
7) I went to the Setup menu and selected Save.
8) I typed “Long Tail – Nebula” in the Setup Name field.
9) I unchecked the “32 bit (x86) VST plugin” box since I was only using the 64-bit version.
10) I clicked “Save”.
The next time I loaded my DAW, “Long Tail – Nebula” appeared in the list of plug-ins. The reverb programs (like the free VNXT EMT 140 2.5 second one) ran much better than with the default buffer used in my standard Nebula plug-in. The plug-in loaded much quicker too, since the index included just 399 programs instead of well over 7,000 like the main Nebula instance on my system (which also made it quicker to find the reverb programs I wanted to use).
The key to the way Nebula Setups 2 works is the XML file approach that Nebula itself uses.
One XML file is located in the same folder as the Nebula plug-in and shares the name of the plug-in (for instance “Long Tail - Nebula.xml” in my example). This XML file stores several Nebula configuration parameters (like the DSPBUFFER setting that determines latency) and tells Nebula which XML file to look for in the “Setups” folder.
That XML file is paired with another in the “Setups” folder within the Nebula system folder mentioned earlier. This second XML file has the same name (once again “Long Tail - Nebula.xml” in my case), but it is an index of the Programs that Nebula will display when you load the plug-in.
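For illustration only – the element names below are hypothetical placeholders, not Acustica Audio’s actual schema – the pairing works roughly like this: a small configuration file next to the plug-in, and an index file with the same name in the “Setups” folder.

```xml
<!-- “Long Tail - Nebula.xml”, next to the plug-in itself.
     Hypothetical element names: they only illustrate the two roles
     described above, not the real file format. -->
<PluginConfig>
  <DSPBUFFER>4096</DSPBUFFER>        <!-- internal buffer size in samples -->
  <Setup>Long Tail - Nebula</Setup>  <!-- which index to load from “Setups” -->
</PluginConfig>

<!-- “Long Tail - Nebula.xml”, inside the “Setups” folder: an index of
     the Programs this plug-in instance will display. -->
<ProgramIndex>
  <Program>VNXT EMT 140 2.5s</Program>
  <Program>Some Delay Program</Program>
</ProgramIndex>
```

Nebula Setups 2 essentially writes both files for you, which is why each saved setup shows up as its own plug-in with its own buffer settings and program list.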
At the time of writing, a release candidate of Nebula Setups 2 was available and functioned properly in my testing under both Windows 7 x64 and OS X Mountain Lion 10.8.4. [Update 2013-11-21: It is now available for sale at http://www.acustica-audio.com/index.php?page=shop.browse&category_id=67&option=com_virtuemart&Itemid=53]
Zabukowski Software’s NebulaMan
NebulaMan is another tool you can find at http://zabukowski.com/software/, or for sale in the Tools section at http://www.acustica-audio.com/index.php?page=shop.browse&category_id=67&option=com_virtuemart&Itemid=53. This one is designed to audition Nebula settings in real time or batch process them onto audio files. It allows you to create several different Nebula combinations (stacking as many instances in serial as you want), provides the option to automatically bring input levels into Nebula’s optimal range, and even lets you link input and output level sliders if you want. All of this allows you to very quickly set up, compare and process large numbers of files without opening a DAW, along with a few additional tricks of course. The process is made quicker by shortcut keys that let you switch the active Nebula chain used for preview playback at any time.
As a point of clarification, the batch processing is based around processing several files in a row with the same settings, not processing the same files with a lot of different Nebula programs in a row (though processing files with any of the handful of setups you have open at a given time is a quick process).
Depending on the batch capabilities already offered in your DAW (Reaper being fairly extensive in this area, for example), you may not need the batch capabilities on offer here. But NebulaMan makes the process very quick and very easy, and the small tweaks (like setting levels into Nebula’s optimal range at the push of a button, with the option of whether to compensate the output or not) are a very nice addition indeed that may make the software useful to you nonetheless.
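NebulaMan’s own level-matching code isn’t public, and Nebula’s exact “optimal range” isn’t spelled out in one place, but the underlying arithmetic is simple. Here is a minimal sketch: the function name and the -18 dBFS peak target are my assumptions for illustration, not Zabukowski’s values.

```python
def gain_into_range(peak_linear, target_dbfs=-18.0):
    """Linear gain that brings a file's measured peak to the target level.

    `target_dbfs` is an illustrative assumption; NebulaMan's actual target
    for Nebula's optimal input range may differ."""
    target_linear = 10 ** (target_dbfs / 20.0)  # dBFS -> linear amplitude
    return target_linear / peak_linear

# A file peaking at -6 dBFS (linear ~0.501) needs roughly -12 dB of gain
# to land at -18 dBFS.
gain = gain_into_range(0.501)
```

The optional output compensation NebulaMan offers would simply be the reciprocal gain applied after processing, so the file comes back out at its original level.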
Organization and Optimizations
I love the sounds I can create with Nebula, I love the array of possibilities, and the pricing of the libraries and software is very fair – but there are still things that could use improvement. Note that I’ve included certain issues with 3rd-party libraries here, even though Acustica Audio obviously is not responsible for some of the choices that 3rd-party developers make.
First of all, organization. The current way the programs are tagged and organized is inconsistent and confusing. For example, there were times when I could not find a library I had just installed, either using the Nebula interface itself or by using keyword sorting within Nebula Setups 2. Here are some of the reasons for that.
1) Sometimes 3rd-party developers create entirely new categories for each library; sometimes they put their programs into sub-categories of existing categories (like “Pres”); sometimes they use a single folder for almost all their programs (like “HO” for Henry Olonga); or they may create a new category for some of their collections (like “CCC” for the “CDSM Classic Console” libraries). Often the category is not specified either on the web page for the library or in the accompanying documentation, making it more and more difficult to find new libraries in the list as your collection expands.
Potential fix for 3rd-Party Developers: please always specify the category your library will install with, preferably both on the web page and in the documentation. As a user, I also wouldn’t mind a more standardized approach to category organization, but that may be asking for too much. 😉
2) If we look at the fields that Nebula Setups 2 reads, we’ll see categories, “Name”, “Description”, “File Name” and “Number.” You’ll notice that there is no dedicated field for “Author”, “Library” or “Website” so the description may include one of these – or it may include an actual description. If you installed several libraries at once (sometimes I would install 10 or more in one day), you may have to go back to look at the file names used in the install just to be sure what library the program you’re loading came from (let alone where that library came from). This is not just an academic point: I came across several forum posts online where users were trying to find additional libraries by the author of an old Nebula library… only to realize that they had no way of identifying the author of the free programs in question.
Potential fix for Acustica Audio and 3rd-Party Developers: Add more dedicated tagging/metadata fields (“Author”, “Website” and “Library” should be the bare minimum) and leave the description field for actual description. If a user likes a given Program, they usually will want to know where it came from, and it makes it much easier for them to buy more if they do. 🙂
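To make the suggestion concrete, here is a purely hypothetical sketch of what richer Program metadata could look like. None of these dedicated fields exist in Nebula’s current format (which, as noted, exposes only category, “Name”, “Description”, “File Name” and “Number”); the names below are mine.

```xml
<!-- Hypothetical extended Program metadata: the Author, Library and
     Website elements are proposals, not part of Nebula's actual format. -->
<ProgramInfo>
  <Name>Plate Reverb 2.5s</Name>
  <Category>REV</Category>
  <Author>Example Developer</Author>            <!-- proposed -->
  <Library>Example Plate Collection</Library>   <!-- proposed -->
  <Website>http://example.com</Website>         <!-- proposed -->
  <Description>Bright plate reverb with a medium decay.</Description>
</ProgramInfo>
```

With fields like these, a browser such as Nebula Setups 2 could sort or filter by developer, and the forum users mentioned above could have traced those free programs back to their author in seconds.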
Second, there’s the issue of configuring Nebula for different types of programs. Each Nebula plug-in format installs two versions by default: Nebula3 and Nebula3 Reverb. But most developers suggest using Nebula3 Reverb for all their programs, regardless of content type, for best sound quality. So far everything is fine, and it seems like Nebula3 is primarily there for legacy support. But then we start to get into files that require or benefit from different optimizations: delays and reverbs with long tails (which require a higher DSPBUFFER setting than normal programs) or a handful of specialized compressors (like some of the best libraries from Gemini Audio). If you get Nebula Setups 2, it is a great tool for improving load times and keeping Nebula settings optimized for different types of programs, but the fact remains that the process is still not as easy as it could be.
From an ease of use perspective, it would be simplest if 3rd-party developers could specify optimizations within the Program file so that users didn’t have to do any optimizations themselves at all. But as multiple people pointed out to me while I was writing my review, some parameters (such as DSPBUFFER) will have different optimized values on different systems. I generally found myself using identical values on two wildly differently specced OS X and Windows 7 systems, since I was most concerned with minimizing CPU usage – running almost exactly opposite to the workflow suggested by SoundOnSound in an earlier review. In other words, I can see Acustica Audio’s difficulty here. Maybe they can come up with an elegant solution that I can’t think of.
In the meantime, perhaps one workaround would be to at least include a third Nebula instance (perhaps called “Nebula3 High Latency”) configured with a 4096 or 8192 sample buffer, so that new users would be able to run long-tail reverb and delay libraries without changing configuration settings. The first time I tried to load some of those programs using Nebula3 Reverb with the default buffer, they almost completely failed to play back, sending me searching for the tweak that would make them run (which I eventually found to be the DSPBUFFER).
In fairness to Acustica Audio, a significant portion of the existing Nebula market seems to be power users who either like or don’t mind making these kinds of tweaks. The sound of Nebula is so good, however, that a much larger market might enjoy using it if the experience catered more to new users.
Nebula and 3rd Party Developer Websites
The modern Acustica Audio website is a huge improvement over what I encountered when I bought Nebula 2 years ago. It’s friendlier, faster (download speeds were generally excellent), more visually appealing and easier to navigate. Nonetheless, there’s still room for improvement – something that could also be said of many of the 3rd-party developer sites. For the Acustica Audio website, the biggest improvements would be:
– [Updated December 1st, 2013] Originally I had written down “Further automating/improving the challenge and response process so that users can get their Nebula license within seconds or minutes instead of hours or days.” Of course Acustica Audio decided to improve that, so the automated system should now provide users with their code in “3 to 60 minutes.” In my testing, it took less than a minute. I’m glad to see the company get proactive about this.
– Providing clearer descriptions of different Nebula versions on the shop page itself, rather than relying on users visiting the comparison table to find out.
– Keeping the F.A.Q. more frequently updated with the information that the Acustica Audio team provides elsewhere on the site in their forum posts. [The current F.A.Q. is up to date at the time of writing, so this has hopefully already become a thing of the past].
– A clearly highlighted “Getting Started” guide for Nebula, possibly with videos.
– Having the splash page include rolling news updates in addition to the currently featured graphics of new products.
For the 3rd-party developers, here are a few suggestions that would help to better cater to a different breed of potential customers. [Updated November 21st, 2013] Acustica Audio has said they are moving to start hosting the third-party libraries in a centralized shop. This could address many of the issues mentioned here if it comes to fruition.
– Make it easier to buy lots of things at once. I am not talking about doing deep discounts – heck, you don’t even have to discount at all if you don’t want to since the prices are already very fair. I mean make it possible to add all the libraries you sell at once to the shopping cart with just a click or two. In one case I had to spend what seemed like over 30 minutes adding libraries one by one to the shopping cart to buy everything from a given developer. So – if your content is good and someone just wants to buy all of it at once, make it easy for them to do that. If you want to do a discount too, that’s fine – but mainly just make it easy to buy a lot of things at once.
– Organizing your shop. With very few exceptions, the shopping categories were not as clearly organized as they could be. Issues included only allowing sorting by overly specific categories (as a user I would rather be able to see several libraries on one page than have categories so specific they include just one library) or omitting categories almost entirely. Since some of the developers use a blog-style interface, it can also be difficult to get an overview of what programs are available. Any improvements in this area will be greatly appreciated by your new customers.
– More tutorials, video and audio examples.
A Note About CUDA
I’ll briefly mention CUDA support. Nebula’s CUDA support has been mentioned in the past (and it still gets listed on the Acustica Audio site in the Feature Comparison table for both 3.6 Pro and 3.6 Pro Server). At the time of writing, CUDA support was not documented with sufficient consistency for me to recommend exploring it, and many posts on the topic were discouraging. Contrasting answers were given on different parts of Acustica Audio’s site, and there’s a general sense that this has not been developed to the point of being a marketable feature. This did not adversely affect my opinion of the product, but I wanted to clarify the point for any readers considering the platform on the basis of CUDA support.
Providing any sort of comprehensive look at the Nebula platform extends beyond the scope of a single review (even one as long as this one), so we’ll be going much further in depth with specific Nebula libraries and standalone products in our next review. But the most succinct way I can put it is that the Nebula platform is one of the most exciting mixing, mastering, sound design and processing tools that I’ve ever encountered in digital audio. When it’s used to sample analog hardware, it usually gets closer to the original sound than any emulation/sampling I’ve heard – and the array of libraries available for it is extremely compelling. Factor in that you could buy every library available for the platform today (even a la carte, without having to use a single discount or bundle) for less than the price of Waves Mercury or the UAD Ultimate bundle, and you have a very compelling challenger in this area. One of the artists I mix for (who has experience with both the aforementioned products) even says that hearing the sound of Nebula being applied to her mixes is her favorite part of the whole mixing process.
It’s not for everyone of course (especially not people who need low-latency real-time effects while tracking, or who have slow CPUs, low RAM and small hard drives), and there’s still room for improvement in terms of the user interface (making it easier to display and modify multiple bands of EQ; clearer and more consistent metering for compressors, etc.), but it really is one of the very best digital options available anywhere for adding vibe to your sound. And it’s getting better every day.
If you use solo strings in your music (or want to) then read this to find out if the Embertone Friedlander Violin for Kontakt could be the one for you. Our reviewer thinks it’s pretty special.
by Per Lichtman, Nov. 2013
What is Embertone Friedlander Violin?
UPDATE: You can now also read our review of the updated Embertone Friedlander Violin 1.5 at http://soundbytesmag.net/embertoneblakuscelloandfriedlanderviolin/.
Embertone Friedlander Violin (EFV) is a new Kontakt 5 Player solo violin library (fully compatible with the full Kontakt 5) that puts expressive playing first and range of articulations second. With close-miking and non-vibrato legato sampling (including slur, bow-change and portamento) on a level unlike any commercial sample library released to date (and I compared it hands-on to many others) the library extends further through the use of fully controllable vibrato, additional sustain samples and 8x round-robin. I possess or have tried most of the available solo string libraries hands-on (with the notable exceptions of the Spitfire Audio, Kirk Hunter and 4Scoring releases) and EFV is quite simply something different entirely. If you use solo strings in your music (or want to) then read on to find out if it could be the one for you.
Vibrato, Non-Vibrato and a World of Gradation
Embertone Friedlander Violin is ambitious in its approach to modeling vibrato. Normally, sample libraries include different recordings depending on the level of vibrato. For instance, non-vibrato, vibrato and molto vibrato recordings can be made, and the user can be given control to cross-fade between them. The primary advantage of this approach is that the detail of a given performer’s vibrato is reproduced exactly as in the recording, and it can generally be implemented well in a variety of samplers. Two of the potential pitfalls are phasing and audibly obvious crossover points while cross-fading. In addition, there are limits to the control of the vibrato (frequency, pitch variation and amplitude are all baked in, for example). Embertone Friedlander Violin instead opts to give the user direct control over every aspect of the vibrato via MIDI CCs, combining non-vibrato recordings with modeled vibrato created within the sampler. They also provide a control option for users of TouchOSC for iPad to let them dynamically control vibrato. A handful of products have previously attempted to give this sort of vibrato control for the violin (notably the discontinued Garritan Stradivari Solo Violin and Synful Orchestra) but both of them failed to provide a sufficiently convincing non-vibrato tone to use as a starting point. Thankfully EFV succeeds wonderfully, using close-miking that captures one of the most convincing non-vibrato tones for the violin in any library to date.
One of the nice things about EFV is how easy Embertone have made it to customize the controls. The multi-tab interface lets you change the mapping of any MIDI CC with a few clicks – though the sustain pedal and keyswitches have locked functionality.
Here’s my perspective on it: I have a real violin. I can barely play it, but I nonetheless choose to do so from time to time. In playing my own violin, it’s been my experience that the instrument is beautiful and musical, full of the potential for expression even without employing vibrato. By building EFV on non-vibrato recordings and then adding vibrato control, Embertone have managed to capture the expressiveness and breadth of articulations that have too long played “second fiddle” to their legato brethren.
But getting back to that vibrato, here’s the big question: how does that vibrato sound? If you are playing single notes and you are happy with every detail of the vibrato performance, then some competing libraries will offer a more authentic tone. Let’s just get that out of the way first. But Embertone lets me sculpt the performance of the vibrato in a way those do not: delayed onset of vibrato, controlled acceleration and deceleration of the vibrato rate, over a hundred gradations between non-vibrato and molto-vibrato, etc. I often have issues in other libraries with vibrato starting far too quickly after I move from note to note but with EFV I get to choose. Which is more important to your performance or recording is a personal decision: for myself, it varies but EFV makes the strongest argument for modeled vibrato control of any solo string product I’ve encountered to date.
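Embertone’s vibrato model is proprietary, but the kind of control described above – delayed onset, a vibrato rate that accelerates as it settles, and fine gradations of depth – can be sketched as a simple pitch-offset function. Everything here (the function name, the parameter values, the easing curve) is a toy illustration of the concept, not Embertone’s implementation.

```python
import math

def vibrato_cents(t, onset=0.3, ramp=0.5, rate_hz=5.5, depth_cents=35.0):
    """Pitch offset in cents at time t (seconds) for a modeled vibrato:
    flat until `onset`, then depth and rate ease in over `ramp` seconds."""
    if t < onset:
        return 0.0                         # delayed vibrato onset
    x = min((t - onset) / ramp, 1.0)       # 0..1 ease-in factor
    rate = rate_hz * (0.5 + 0.5 * x)       # rate accelerates as vibrato settles
    return depth_cents * x * math.sin(2 * math.pi * rate * (t - onset))
```

In a real instrument script, a curve like this would modulate the playback pitch of the non-vibrato samples, with `onset`, `ramp`, `rate_hz` and `depth_cents` each mapped to a MIDI CC so the player can reshape the vibrato mid-note.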
In the current 1.01 release, Embertone Friedlander Violin has 3 dynamic layers for the staccato samples and both normal and harsh sustains. The legato samples (all 3 types) have a single dynamic layer.
Kontakt, Kontakt Player
EFV is a Kontakt 5 Player library: you don’t need to buy Kontakt to run it, but it runs properly in the full Kontakt 5 (and likely will in future Kontakt versions as well). I tested it using the newest build of the full Kontakt 5 to see how much editing was possible. Many Kontakt Player libraries lock direct access to editing the instruments, but EFV gives direct access to just about everything – except for the proprietary performance script. The script can be disabled entirely if you wanted to for some reason, but I honestly can’t see any reason for that at this point. On the other hand, I did make use of the editing capability to strip away every single bit of Kontakt FX in the instrument to find out just what the raw samples sounded like and how close the recordings were. The answer: EFV has some of the closest miking I’ve heard in a violin library and has opted to capture the sound “warts and all”, with full bowstrokes and no looping. Instead, the library relies on alternating bowstrokes to continue a sustaining note if you keep the note held down.
Performance and System Requirements
EFV offers 16-bit and 24-bit samples in mono and stereo configurations. All of these are available in normal and “LoRAM” variants, with different controller configurations preset for you. I did most of my testing using the 24-bit stereo samples, but the others use proportionally less RAM.
All my testing was done using a 7200 RPM hard drive. You don’t need an SSD to get the optimal experience out of EFV, and I didn’t test one – though, based on my experience with other Kontakt libraries, one would be likely to reduce load times.
Sound and Mixing
The close-miking approach makes EFV uniquely well-suited to be used in a variety of venues outside the halls, chambers and soundstages of classical recording that are the bread and butter of most solo string products. While EFV is quite capable in those more common venues as well, it can also be mixed into pop, country, electronica and pretty much anything you can throw at it. It’s miked so close that once you disable Kontakt’s FX, you can throw any reverb you want on it without any identifiable sonic fingerprint of the space it was originally recorded in. You can even get closer to the instrument than VSL’s revered Silent Stage recordings (which have been some of my most frequently used solo string samples for close to a decade).
Similarly, EFV is well suited to different kinds of post-processing. Once again, if you disable the internal Kontakt FX, you can throw on pretty much any of your own that you want. I tried everything from emulating preamps and consoles, to equalizers and compressors, to tapes and exciters and enhancers and was able to get good results without making recording noise or the acoustic venue more noticeably apparent.
Since I play violin, I would be remiss if I did not linger on timbre for a moment. The timbre of the Embertone Friedlander Violin is different than those in most other popular sample libraries: this is neither good nor bad, but it is important. I’ve talked to other violinists who have different violins for different applications (one for their solo work, one for playing in quartets, etc.), and one of the reasons is that different tones will work better for different applications. EFV has more overall body and a bit more rosin and bow noise than the recordings of some of the competing libraries, and sometimes a bit less focus to the tone of the soaring notes. It does not sound like a Stradivarius, but the contrast in tone could create a positive differentiation against one in a quartet.
You’ll want to pair Embertone Friedlander Violin with a good reverb for best results – the included one won’t cut it for discerning users. Like most things, I found I got good results pairing it with Numerical Sound FORTI and SERTI, especially since the latter offered smaller spaces for chamber music, but you can get realistic results with other products based on real acoustic venues as well. If you want unrealistic results, you could always pair it with an algorithmic reverb instead, and it certainly is up to the task of performing soaring lines in dreamlike soundscapes should you choose to go that route.
Using With Notation Software
If you primarily use notation software, as opposed to either using a DAW or performing virtual instruments live, you’ll have to do a bit more work to get things set up with EFV than with competing products like XSample Chamber Ensemble. But one thing you may appreciate is the use of the sustain pedal, which makes legato lines exceedingly easy to program (just insert pedal in and out events at the start and end of the phrase) in even the oldest and least expensive notation apps. Nonetheless, many composers using notation software will want a greater breadth of articulations than EFV provides – though the promised free update should narrow the gap somewhat. In the meantime, there are several competing products that cater to a greater range of articulations.
Articulations currently cover sustain, slur legato, bow change legato and staccato. An update promised for the coming months, at no charge to existing customers, will add several more articulations; but in its present form the library already demonstrates several key strengths in both sound and playability – even though the range of articulations is much smaller than in libraries like VSL Solo Strings.
Embertone Friedlander Violin vs. Garritan Stradivari Solo Violin
Most people active in orchestral sampling over the last decade will remember the 2005 Garritan Stradivari Solo Violin (GSSV) – the first sample library to attempt the sort of expressive control that Embertone Friedlander Violin strives for. Since that product was discontinued, a lot of mystique has built up around it (especially since it shares lineage with later critically acclaimed SampleModeling libraries), and since I bought the product years ago, I thought I’d compare it to EFV.
Quite simply, Embertone Friedlander Violin blows GSSV away by almost every metric. The EFV base non-vibrato tone is superior and more convincing (to be clear, I’m speaking about the specific recordings, not the acoustic instruments they were made from). The EFV legato transitions are more expressive in all modes (bow change, slur and portamento). EFV offers full Kontakt access to instrument editing capabilities while GSSV offers none. EFV also offers pre-built menus for controlling or changing most parameters, while the most recent GSSV version came with almost all parameters (including key switches) locked. In addition, EFV automatically re-bows using an alternate sample at the end of a sustain, while GSSV simply ends.
Basically, unless you need articulations that EFV doesn’t provide or are looking for a more ethereal tone with less body (or less expressive but rapid legato transitions), there’s almost no reason to use GSSV anymore. EFV really is the product that I had hoped GSSV would be originally, and it has taken advantage of the advances of the last 8 years to make a much more expressive instrument.
If you want to do a comparison of your own, keep in mind that GSSV needs to be run in Kontakt Player 2, not the newer versions, to get the most out of the library. So there’s no 64-bit support for newer systems. EFV, on the other hand, is designed for Kontakt 5/Kontakt Player 5, so you won’t be able to run it on legacy systems (like PPC Macs).
If that’s not enough for you, here are a few more quick notes for comparison.
Configuration: the latest version of GSSV was a closed system (especially in regards to assigning CCs and using different convolution impulses). EFV 1.01 has great flexibility. While I’m not wild about the included convolution impulses, even they can be swapped in the full Kontakt 5 without much extra work (or disabled completely).
Dynamics: GSSV makes superior use of multiple dynamics and proprietary harmonic alignment techniques for matching samples. EFV could be improved through multiple dynamic layers in the future, though it is worth noting that I generally found the result of using it (even with the limitations of the dynamics) to be more musical than GSSV.
Attacks: GSSV uses the approach of blending a short attack sample with a different sustain sample, which can be difficult to adapt to coming from other libraries. In addition, the recordings themselves were not to my taste. With EFV, the dedicated staccato samples (3 dynamic layers with 8x round-robin) work much better and are easier to use. EFV also benefits from giving the user the choice of controlling the level of staccato notes by velocity or by the same CC used for the dynamics of the sustains. In other words, you can use the same method for controlling the level of short and long notes if you want.
Basically, EFV does fewer things – but does almost all of them better, especially expressive legato sampling.
Embertone Friedlander Violin pushes the envelope in several key areas for solo instruments: legato, vibrato control, overall control and editability and mixing flexibility. There are certain things in Embertone Friedlander Violin that are so good that all other sample library developers should take note.
First, the approach to legato is truly expressive – it feels more like actually playing the violin than the most common legato interval-sampling approach does, with a real sense of momentum and flow from note to note, especially within optimal tempo ranges (a range greatly broadened in the “Full” program, which uses Kontakt 5’s Time Machine functionality). Including slurs, bow-changes and portamenti with similar sampling detail in the same program really helps. The fact that it can be used equally effectively with non-vibrato and vibrato without worrying about key switches is a nice bonus.
Second, the use of the sustain/damper pedal in legato mode here is so good and so universally useful that it should become the standard way of programming string libraries from now on. Essentially, when playing a legato line, the sustain pedal causes each note to sustain until the next one is played – and if a note is repeated, a bow-change on the same note is played using a different sample (with a very natural sound to boot). Embertone did not invent this method (it had been heard earlier in Garritan Stradivari Solo Violin, for instance), but they’ve perfected it by using either superior sampling or performances (it’s hard to tell from the finished product) to create better differentiation in the sound of the re-bows and a good sense of flow between them. This is useful for a variety of things, including manually creating lourés within a certain range of tempos without the need for a key switch or loading a different program.
Third, is the combination of easy options for quickly and easily configuring the
No One’s Perfect
With so many things to like, you may be wondering where the product falls short – both in comparison to other libraries and in its overall potential. The most obvious limitation is articulations. Compared to established competing products, like Vienna Solo Strings (Volumes I and II) or XSample Chamber Ensemble Solo Strings (available on their own or as part of the XCE bundle), and at times even the solo strings in East West Quantum Leap Symphonic Orchestra, the overall articulation set is much more limited. Embertone has officially announced the addition of several important articulations in a free update expected within the next couple of months (possibly sooner) that could increase the value of the library further. While the tremolo, sul ponticello, pizzicatos and additional sustain layers (as well as the custom programmed Con Sordino (string mute) effect applied note-by-note) would certainly increase the versatility of the instrument, I can only review that product if and when it arrives. Common sense (and years of industry experience) dictates that I not count my chickens before they hatch. Even then, if you need flautando/sul tasto, harmonics or con sordino performances, you’ll have to look elsewhere – I would start with offerings from XSample, VSL and EWQLSO.
As a counterpoint on the articulations front, I would like to emphasize that Embertone Friedlander Violin is the only library I’ve encountered to date that offers such powerful legato programs based around non-vibrato performances (that of course can be fully vibrato controlled as well).
Part of the Family?
As mentioned before, Embertone Friedlander Violin is the first solo string product from Embertone. They’ve announced a cello (Blakus Cello) and said that a viola is next in line, but neither has been released yet, meaning that it’s important to know whether EFV “plays well with others”. For me, the answer was a resounding “yes”. While it takes a little bit of mixing work up-front, I found that Embertone Friedlander Violin was good at playing first violin in chamber music ensembles, as well as taking solo lines in an orchestral setting. It could be made to double other instruments or play second violin against other libraries, but this took a little more work and did not play to the library’s natural strengths quite as much. Still, I did so several times with good results. But if you want an integrated set of solo strings right now, without dealing with additional mixing work to make them coherent, I would look to competing products.
Embertone Friedlander Violin currently sells for $110 U.S. If you broke down the price of competing products on a “per-instrument” basis, you’d find several that are competitive with EFV, but their price of entry is higher. In fact, the only other interval-sampled legato violin available for less than twice the price of Embertone Friedlander Violin is QL Solo Violin at $99 – a product with a timbre so poor that practically every demo for it has made me cringe, and a rare miss from the company whose EWQLSO solo strings I enjoy. In other words, Embertone Friedlander Violin is really competing with products starting at $250 and above – so there’s additional incentive to check it out if price is a concern.
EFV also features an Ensemble tab. Here you can engage or bypass the ensemble mode, which uses several samples from the instrument at the same time (panned differently) to create an ensemble effect. The mode lets you choose the total number of players from 1 to 9 and specify the stereo spread, timing and intonation range. There’s also a button to process the legato transitions together. In addition, the overall level is decreased whenever ensemble mode is engaged to prevent the patch from sounding louder just because more voices are playing – a handy bit of forethought.
The sound of the ensemble mode is both a useful addition to (and distinct from) the sound of string orchestra libraries, sounding (appropriately enough) as though it has been miked a little closer and benefiting greatly from the legato programming.
Layering With Other Libraries
When layering EFV in solo mode with other libraries, I consistently found that EFV brought the sound closer and granted a little more immediacy by default. When in ensemble mode, I found it worked especially well when I layered it with instruments from other solo string libraries. The staccatos are especially effective in this capacity – it’s easier to match staccatos between libraries than legatos and the 8 layers of round-robins are helpful for masking a more limited number of repetitions in another library. This is also one of the places where the close-miking once again pays dividends since the library doesn’t start to sound “too big” or have too much room, which are both problems I’ve had with certain other libraries using a similar method for ensembles.
So to recap, the highlights of the library are the expressiveness of the samples; the effectiveness of the programming; the fact that it’s one of the first libraries that doesn’t treat non-vibrato legato like a second-class citizen; the level of control available; and the uncommon amount of body and core that results from the miking. It’s a library based on dynamic control rather than pre-built phrases that begins with some great source recordings – but it’s still very easy to use and lets you customize just about everything to suit you. It doesn’t offer a comprehensive array of articulations – just a focused range of extremely expressive ones, miked so that they can be used in any mix you want.
No matter what solo string library you already use, Embertone Friedlander Violin brings something a little different to the table – so if you’re shopping for a violin library it should be one of the very first that you take a look at.
More information can be found here: http://embertone.com/
It’s rare these days to encounter a new synth that does something you’ve never encountered before. We talk to the developer of just such an instrument, Daven Hughes.
by David Baer, Nov. 2013
It’s rare these days to encounter a new synth that does something you’ve never encountered before. It’s even rarer when your response upon first hearing the instrument is “Holy [word-of-choice-to-follow-Holy]!” But that was precisely my reaction upon first encountering Cycle, a recently introduced instrument by developer Daven Hughes, who hails from Fredericton, New Brunswick. Even more remarkable than the stunning sounds that can be created with Cycle is the fact that it is the first synth offering from this developer. More information (as well as some very nicely done audio demo clips) can be found here:
But with innovation can come bewilderment when faced with unfamiliar technology, and with Cycle it becomes clear early on that “we’re not in Kansas anymore”. To be fair, at the time I’m writing this, Daven has yet to complete the on-line documentation, so to that extent, Cycle is a work in progress.
As can be seen from the image above (for a larger view, visit the web site referenced above), the GUI for Cycle is dazzling, but some may also find it intimidating. Things get even more so when reading through the help glossary and stumbling over terms like “vertex cube” and “intercept path”.
Fortunately, we’ve got the developer right here to lead us through the concepts behind Cycle, so let’s get down to business without further preamble.
Sound Bytes: Daven, thank you for taking the time to talk to us. Let me start with what may be a complicated question that hopefully has a less than complicated answer. Would it be fair to say that the bulk of the innovation that makes Cycle unique is to be found in the component most people would think of as the oscillator portion of a synth, but in the case of Cycle, that component also contains what would be the filter section of a more conventional synth?
Daven Hughes: It’s safe to say the method of sound generation is the most distinguishing feature in Cycle. Before I get into that, I’ll note that the rest of the synthesizer sections – envelopes, effects – are somewhat less alien, so sound designers will have a familiar place to start.
There are two ways Cycle creates sound. Similar to the classic subtractive synthesizer, it can generate sound with an oscillator. The oscillator is particularly flexible, because the user designs the wave shape with points and curves and so it can be as simple or as complex as desired. The twist is that this shape can also morph in a controlled way. To make a wave shape morph, Cycle lets the user define paths for the curve points to follow over time. This structure of points, curves, and paths is a bit like a wire frame, and the work flow of drawing wave shapes and creating paths is quite similar to 3D modeling.
The spectral synthesis component is the second way Cycle generates sound. As you suggest, this component also takes care of filtering.
Some spectral synthesizers let you operate on the spectral data of a sample, or use an image as the spectral source.
In Cycle, instead of analyzing a sample, it analyzes the spectral content of the signal created by the morphing oscillator. This sets the stage for further spectral editing. That work flow of 3D design applies to the spectral domain too, but instead of wave shapes, you’re drawing curves that either multiply or add to the harmonic spectrum. Like wave shapes, these curves can also morph with time. Because you’re not tediously painting the spectrum with mouse-strokes, basic filtering is very straightforward – a low pass filter takes only two clicks, for example.
S.B.: So let me try to paraphrase to see if we’ve got this right. When you talk about drawing a filtering curve, you might be simply describing drawing a basic low pass filter response curve with a value of 1.0 below the cutoff and a declining slope to the right? This is then used as a multiplier across the frequency spectrum, acting on partial amplitudes? Also, resonance would be obtained by adding a “hump” in the curve at the cutoff point?
D.H.: Yes, it’s as simple as that, with one detail: filtering curves are neutral at value 0.5 because they are bipolar, so the lower half attenuates and the upper half amplifies. That way a tall spike in the curve will boost the harmonics a lot. If the curve is in additive mode, the neutral point is at 0 and any part of the curve above that adds to the harmonic magnitudes.
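The bipolar-curve idea can be sketched in a few lines of Python. Everything here – the function name, the exponential mapping used for the gain depth, and the example low-pass curve – is an illustrative assumption, not Cycle’s actual code:

```python
import numpy as np

def apply_filter_curve(magnitudes, curve, mode="multiply", depth=2.0):
    """Apply a bipolar filter curve to an array of harmonic magnitudes.

    In "multiply" mode the curve is neutral at 0.5: values below 0.5
    attenuate a harmonic, values above 0.5 boost it. In additive mode
    the neutral point is 0, and anything above it adds harmonic energy.
    The exponential `depth` mapping is a guess at how a bipolar curve
    might be turned into a gain; Cycle's internals are not public.
    """
    magnitudes = np.asarray(magnitudes, dtype=float)
    curve = np.asarray(curve, dtype=float)
    if mode == "multiply":
        # Map curve value 0.5 -> gain 1.0, below 0.5 -> attenuation, above -> boost.
        gain = np.power(10.0, (curve - 0.5) * depth)
        return magnitudes * gain
    # Additive mode: neutral at 0, positive values add to the magnitudes.
    return magnitudes + np.maximum(curve, 0.0)

# The "two clicks" low-pass: neutral (0.5) below the cutoff, falling to 0 above it.
harmonics = np.ones(16)
lp_curve = np.concatenate([np.full(8, 0.5), np.linspace(0.5, 0.0, 8)])
filtered = apply_filter_curve(harmonics, lp_curve)
```

With this toy mapping, harmonics under the cutoff pass unchanged (gain 1.0) while the topmost harmonic is attenuated to a tenth of its level.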
Now, keep in mind that there are two halves to the frequency domain because each harmonic has a magnitude and phase. The Spectrum Filter has a separate mesh for the phase spectrum. With control of the phase spectrum you can get many great effects – you can easily widen the stereo image, create breathy evolving pads, or make a sound more organic.
Graphically, Cycle presents the phase spectrum over time unwrapped. Phase unwrapping means when a harmonic’s phase drifts, say from 0.96 to 0.99 to 0.02, Cycle knows that the phase didn’t suddenly jump down by a large amount, it went up slightly and wrapped down, so then it can restore the true phase, which in this case would be 1.02. With unwrapping, many patterns become clear; without it the phase spectrum over time is chaotic.
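The unwrapping step the developer describes can be sketched as follows. This is a minimal illustration of the idea, assuming phases expressed in cycles (0..1); it is not Cycle’s actual implementation:

```python
def unwrap_phases(phases, period=1.0):
    """Unwrap a sequence of phase values expressed in cycles (0..1).

    Whenever two consecutive values jump by more than half a period,
    assume the phase wrapped around, and shift everything that follows
    by a full period to restore the continuous trajectory.
    """
    unwrapped = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        delta = cur - prev
        if delta < -period / 2:
            offset += period  # drifted up past 1.0 and wrapped down
        elif delta > period / 2:
            offset -= period  # drifted down past 0.0 and wrapped up
        unwrapped.append(cur + offset)
    return unwrapped

# The example from the interview: 0.96, 0.99, 0.02 unwraps to 0.96, 0.99, 1.02.
print(unwrap_phases([0.96, 0.99, 0.02]))
```

NumPy ships the same algorithm as `numpy.unwrap` (it works in radians by default, with a `period` argument in recent versions).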
S.B.: In your online information about Cycle you speak of utilizing five dimensions in the construction of the output wave forms. I gather that time is certainly one of them, and then curves have two dimensions. Right so far? And if so, dare we delve into what the other two dimensions are?
D.H.: Part of the design challenge was to find a way to make sounds expressive. With just the ability to morph wave shapes over time, Cycle would barely improve on a sampler with a single sample mapped across the keyboard. Of course, the content of the synthesized sound might be spectacular, but it would be as limited as an audio sample in terms of expressiveness and keyboard range.
The solution was to allow the wire frames, those defining the time-morphing wave-shapes and spectrum filters, to themselves morph. So, the two other dimensions are key scale, so that you can tweak the wire frame to maintain the naturalness of the sound across the keyboard, and modulation, so that some range of expressiveness can be modeled. Think of how a piano note sounds mellow when struck softly and brilliant when hit hard and all the gradations between. This is what you can do with the modulation dimension.
S.B.: I’m not clear on what you mean when you use the term “wire frame”. Can you give us a simple example of, say, what the wave-shape frame consists of? Also, how many wire frames in total are we talking about here?
D.H.: There are three wireframes involved in sound generation — one for the wave shapes, one for the harmonic magnitude domain, and one for the harmonic phase domain. I think most of us have a mental image of what a wireframe is in 3D design and animation – it’s the set of vertices and connecting lines that make the skeleton of some form, some manifold. In Cycle-speak, I call this structure a mesh.
We’ve established that time is one morphing dimension and that each curve point follows a path over time. Say the ends of this path are points A and B.
Now let’s introduce morphing along the key scale (i.e., simply stated, low notes to high notes). Instead of A and B being fixed points, they each have a path over the key scale, just like the curve point has a path over time. At the low end of the key scale, A and B can be in one position in the mesh; at the high end, they can be in another. Somewhere in between, the points will be a weighted average of the two fixed positions, the weight depending on how close the current note is to either end of the scale. When you do a glissando up the keyboard, the time-path transforms from one configuration to another.
There’s yet another morphing dimension, modulation. So let’s put it together: a curve point is on a path over time, and the ends of that time-path are on their own paths over the key scale, and the end points of those key-scale paths are on paths along the modulation range. Ultimately there are 4 configurations of the time-path that are morphed between as the keyboard note and modulation source change.
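Sketched in Python, the blend between the four configurations amounts to nested linear interpolation, i.e. a bilinear blend over the (key scale, modulation) square. The names and the 2D-point representation are illustrative assumptions, not Cycle’s internals:

```python
def lerp(a, b, t):
    """Linear interpolation between two points, component-wise."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def morph_endpoint(ll, hl, lh, hh, key_pos, mod_pos):
    """Blend the four corner configurations of a time-path endpoint.

    ll/hl = the endpoint's position at low/high key with low modulation,
    lh/hh = the same two positions at high modulation.
    key_pos and mod_pos both run 0..1. A hypothetical sketch of the
    weighted averaging described above.
    """
    at_low_mod = lerp(ll, hl, key_pos)    # blend along the key scale...
    at_high_mod = lerp(lh, hh, key_pos)
    return lerp(at_low_mod, at_high_mod, mod_pos)  # ...then along modulation

# Halfway up the keyboard with modulation at zero: the endpoint sits
# midway between the two low-modulation corner positions.
print(morph_endpoint((0, 0), (1, 0), (0, 1), (1, 1), key_pos=0.5, mod_pos=0.0))  # (0.5, 0.0)
```

Each curve point’s time-path has two such endpoints, so as the note and the modulation source move, the whole path (and with it the wave shape) glides between its four corner configurations.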
This is a bit like explaining how to tie a shoelace over the phone, so check out this diagram showing the morphing process:
S.B.: OK, those diagrams help make all this a lot clearer. Can we now turn our attention to the user interface? Give us a tour of the main areas.
D.H.: Let’s start with the Time Surface. There are two panels that show two views of the waveform surface: a 3D topographic panel for editing the wave shape paths with respect to some morphing dimension, and a 2D shape-editing panel for editing the wave shape itself. Above the 2D wave shape editor, there’s a thin magenta band. This is a helper to identify where the sharpest points of the curve are.
It is important to realize that the 2D/3D panels are just different presentations of the same mesh. As such, certain things are linked between them, like zooming and selected vertices. To the left of the editing region there are controls for the current mesh layer.
Likewise, in the Spectrum Filter, there are two panels: one for editing the paths and one for editing the curves that follow the paths. Clicking the magnitude/phase buttons switches between the two modes and between the data that is visualized in the windows.
Above the Spectrum Filter is the envelopes section. To the left of the envelope graph are the controls to set sustain and loop points. Envelope curves are morphing, much like the wave shape and spectral curves, but the 3D envelope panel to edit the paths is hidden by default. When you select Key or Modulation as the displayed morphing dimension, in place of time, the 3D envelope editor is displayed.
On another tab behind the 2D harmonic spectrum window is the Deformer panel. Here is where you can create a detailed 2D curve that “deforms” a curve-point’s path, making it more intricate. Usually a path is just a straight line that connects the start and end points, but with a deformer, the path can be anything from an exponential curve to a noisy wobble.
In the upper middle part of the UI are the Position sliders, which are important to the morphing workflow. There are three sliders, one for each morphing dimension, and in combination with the mouse position their values set the 5-D cursor position. When you select a vertex (by right clicking), the one closest to this cursor is chosen. So for example, if you wanted to adjust a curve at the high end of the key scale, you’d set the key slider position to max and do the edit; the vertices moved would be those belonging to the upper key scale configuration. [Refer back to the earlier morph diagram to see this illustrated – Ed.].
Below that, the Vertex Parameters panel displays the values of selected vertices averaged as a group. These can be selected vertices of any of the several vertex-based editing panels. In addition, it lets you assign deformers to the different parameters of the selected vertex cube. For example …
The effects tab sits behind the wave shape editor… Here are the wave shaper, impulse response modeler, unison, delay, EQ, and reverb. Thankfully they’re all pretty straightforward. Just a word about the impulse modeler: it’s not a reverb unit, but more along the lines of a tube amplifier effect. Impulses do not morph, at least, not yet!
Then finally there’s the Preset Browser tucked away in another tab behind the wave shape editor. In the future there may be many hundreds of presets, but navigating these should be easy with the browser. Just type in a keyword or two and hit enter.
S.B.: So how does this all come together in sound design? Can you talk us through the process of building a preset? [Ed. note: I sent Daven a single sample I cobbled together from presets in … well, let’s just say another synth … as grist to the Cycle preset mill. This is what he’s talking about in the next section.]
D.H.: I haven’t introduced layers yet, and they’re important to the sound design process. When I said there are just three wireframe/meshes, I lied. Each of the three oscillator domains actually has unlimited layers, and a layer is basically a mesh and some other properties like pan. Harmonic magnitude layers are special and have a couple more properties – dynamic range and mode (additive or subtractive).
So let’s jump in. After loading the reference sample, the first thing I notice is there are two patterns in the time domain.
Most likely, this is the synthesis of two detuned oscillators or sound samples.
After tweaking the wave-pitch envelope to straighten the patterns out, we can begin tracing them in the wave shape editor. After roughly outlining the first pattern, create a second Time Surface layer. I eyeball the wave shape of the second pattern and draw it. It’s phasing downward, indicating the voice is detuned flat. Time Surface layers cannot be detuned, so to replicate this I drag down the rightmost vertices in the 3D editor simulating a moving phase. This isn’t quite the same as detuning, as the phase drift is limited to the duration of the preset, but it’s close.
The high-frequency details of the wave shapes are hard to trace, and further, they do not scale well with the key scale. Frequency-domain patterns scale more naturally, so I’ll let the spectral filtering fill in these details. To that end, I set the first harmonic spectrum layer to additive mode and add some peaks in the 2D editor to mirror the spectral peaks of the sample’s spectrum.
Auditioning some notes, I notice that high notes are far too bright compared to low. This is partly a consequence of the time-domain approach. To counter this, I’m going to add a filter layer that will boost the treble at the bottom of the key scale and dampen it at the top. So I add a harmonic magnitude layer and set its mode to subtractive.
Do these steps to create the filter in the 2D spectrum editor:
1. Define a mild filter roll off (should take just 2 points)
2. Deselect key linking
3. Move key position slider to its minimum
4. Drag the curve points to boost the high frequencies
5. Move key position slider to its maximum
6. Drag the curve points to reduce the high frequencies
Test some notes and see the effect. If it’s not enough, you can adjust the curve again or increase the dynamic range of the layer.
With harmonic magnitude layers, order matters. A subtractive layer filters all layers underneath it, so in this case, the additive harmonics of the first layer are being filtered by the subtractive, equalizing layer. This is not quite what I want, because I only want the wave-shape harmonics to be corrected over the key scale; the spectral details I added in the first layer are fine. So I want to move the subtractive layer down, and I do that with the up-down layer buttons in the layer controls area.
The attack of the sound is distinct from the rest. We can see this visually in the waveform surface and the spectrogram: chaos at the start. A study I read showed that the attack is one of the basic elements that give a sound its identity, comparable to the harmonic spectrum, so it’s important to get this right.
First we can use deformers to create perturbations in the wave shape in the attack:
1. Click the deformers tab (next to the 2D spectrum panel)
2. Draw an impulse-like curve, something that oscillates and decays away quickly
3. In the Time Surface, select a vertex at the time-minimum position
4. In the Vertex Parameters panel, assign the deformer to the selected vertex’s amp property
5. Add another deformer channel by clicking the ‘+’ button in the controls area, and repeat 2-4 for another vertex
The harmonic phase spectrum is also great for creating this sort of chaos. Hop into the phase spectrum view and in the 2D editor create a couple of points around the middle of the spectrum. Between these points we will assign a deformer to the shape of the curve itself. This keeps the mesh topology simple and makes it easier to work with a complex curve. So make some sort of complicated shape in a new deformer channel, select the leftmost vertex and assign the Freq-vs-Phase property to that new channel.
To complete the effect we must make sure the changes we made only affect the attack. When there’s a Freq-vs-Phase deformer assigned replacing the default curve arc, the curve sharpness parameter behaves like a volume multiplier for that deformer. Since the sharpness parameter is also deformable, this means we can dampen the deformer-curve with yet another deformer. A quick decay curve is what we want here, so create the deformer channel and make the assignment to the sharpness property. The result will be the rapid distortion of the middling phases, creating a sort of noise.
To finish up I’m going to use a scratch envelope to improve the phasing effect of the second wave shape layer. With a scratch envelope, I distort the timeline of the mesh with another curve. Since envelopes loop, I can sustain the phasing effect forever instead of it expiring after a short time. In the scratch envelope, create an inverted hockey-stick shape (see figure), select the middle vertex and click the ‘loop start’ button.
In Cycle version 1.1, different scratch envelopes can be assigned to different layers. The looping effect is irrelevant for all layers except the second in the Time Surface, so for the others set the scratch channel to null.
That’s it! Hopefully you can see how the sound design process is quite measured and linear. With a bit of practice, something like this sound can take 5-10 minutes to get down, where it may take an indefinite amount of trial and error with other tools. See what you think of the result. [Ed. note: the original sample and a brief clip of music using the Cycle preset constructed by Daven follow.]
S.B.: Well, as I said earlier, clearly we’re not in Kansas anymore. Cycle sound creation seems to be a process unlike anything I’ve ever encountered. So, how did all this happen? Where did Daven Hughes come from, both musically and technologically, that resulted in such a unique device for sound creation?
D.H.: I found FL Studio when I was in high school and tinkered with it constantly. I thought it was a fun challenge to keep everything synthesized, plus I was broke so I didn’t buy any other synths or sample sets.
So that involved a lot of sound design. Always I’d be referencing the waveform output to the sample I wanted to imitate, but it was baffling me how to get the waveform right from tweaking a dozen different synth knobs. Some synths could draw a wave shape or import single cycles, but the shape was then impossible to modulate, and it often sounded too gritty to be useful.
Soon it was clear there were classes of sound that were very troublesome to synthesize – those complex in wave shape and those varying with time, and almost all real instruments fall into this category. Basically, the idea was to create sound from geometry rather than trigonometry. It came from the need to bring those classes into the sound-space accessible to a synthesizer, and do it in a way that wouldn’t tax one’s patience egregiously. I think the measure of a synthesizer is not what it can do “in theory” but what it can do in 10 minutes.
Once I thought that converting images into sound was the way to go. Not by using the image as a spectrogram (as had been done before), but rather using it as the wavetable with each pixel column as a cycle. It turns out that our ears prefer much different patterns than we see visually, so the results were typically unpleasant. Still, that experiment inspired a more visual approach to sound design.
Part of the reason I went through a computer science degree was to develop the idea. I tried to integrate it into the degree wherever possible – the curve design, for example, was something I did as a thesis project. So conceptually, Cycle was a long time in the making.
S.B.: Daven, thank you for taking the time to talk to us. We wish you the best success with Cycle.
D.H.: It’s been my pleasure.
UVI developed the engine for the MachFive sampler. Encouraged by that success, they decided to extract the reverb component and offer it as a stand-alone plug-in. Learn more about it here.
by A. Arsov, Nov. 2013
UVI is a well-known sound library and software developer. They prepared the engine for MOTU MachFive 3 and have also created a good number of first-class sound libraries. UVI is the sound, and the sound is UVI. Encouraged by the success of the UVI Engine, they decided to extract one of its components, the reverb (the one used in MachFive 3), improve its sound and design with some other coding magic, and release it as a stand-alone plug-in.
The main intention was to make a user-friendly reverb with just a few essential controls that should work in most situations, be it solo instruments or full orchestra … a Ford Model T, if you will … a reverb for everyone. Sparkverb has got a sound, and it is made by UVI, so it would be unusual if it didn’t sound good. It also has a few (we could almost call them revolutionary) solutions, which I’m sure will be copied over time by other developers. But, if I may say, they really aimed it at general use, at musicians who are not willing to care about technical things. So how is it that they did not add more specific instrument-oriented effects, for guitar, solo strings and the like? I also spoke with the programmer, and we didn’t share the same opinion regarding the absence of a pre-delay control. To me, pre-delay is a lifesaver in extra-crowded arrangements, preserving snappiness and clarity while still adding an additional amount of air around an instrument. The main developer told me that their intention was to make Sparkverb a simple and user-friendly reverb, which motivated them to leave pre-delay out.
I was not very enthusiastic about that issue, but Sparkverb sounded good enough that I decided to give it a second try. To tell the truth, it was not such a hard decision. Sparkverb brings a nice dash of fresh air to the virtual reverb world, offering some attractive special features, and in most cases it sounds better than many of the other reverbs that I own or have tried. Yes, there were still a few occasions when I really missed pre-delay, and I truly hope that they will add that little knob in some future update. But generally, in most cases it is not hard to find the right preset where drums sound snappier but still rounder at the same time. I’m not sure how to describe this effect, but when I tried Sparkverb on drums and vocals, I discovered that it can make the sound shine. OK, this is not that unusual, but at the same time it somehow tames the sounds, making them more equally present in the space. It is hard to describe, but with Sparkverb all my backing vocals sounded a bit more defined within the soundstage, somehow less jumpy, as if they were slightly compressed, or perhaps we should say equally dispersed. The same happened with drums: they sounded slightly better mixed with Sparkverb. It also worked really well with many other instruments. I just couldn’t set it right for the solo violin in a wild Irish-like arrangement, at least not without pre-delay. Since my aim was to make the violin aggressive, proudly standing in the front line, I simply couldn’t achieve a wet, “attacky” sound without putting the reverb slightly in the background. OK, I presume that you can’t have it all, and it is hard to complain, as this is the only case where Sparkverb didn’t shine.
I was initially concerned about a scarcity of to-the-point presets, but the truth is that Sparkverb offers a very large number of presets: plate, hall, drums, vocal and more. So you will definitely not be swimming in a dry pool this time – you will easily find the right one in little time. The best thing about Sparkverb is that you can lock any parameter, preserving its value while you browse through the presets or press the dice icon, which randomizes all other parameters. So using it as an insert effect is a matter of seconds: lock the mix value at the desired amount and off you go.
The next innovative and fun-to-use feature is the Preset Voyager, something that you might see in sci-fi movies where everything is a matter of a click and space just opens up. The Preset Voyager is a window that you open by pressing the net icon in the upper row, a window sprinkled with colorful dots where every color represents a different category of reverb. Pressing the empty space between dots creates new combinations, and after a minute or two you can find your ideal preset for the instrument at hand. A tweak or two and the preset is ready to save. I found that some weird vocal combinations proved to be the best ones for my drums, so I'll just leave you to be surprised, finding combinations that you would otherwise never think of.
Leaving the Preset Voyager window, you are back in the default one, where you can see the real-time spectrum display. I didn't find this especially essential, but it is really beautiful; at the very least it shows you the general amount of reverb, and it can't hurt to see which frequencies are most used in a particular patch or preset. What is far more important is that with a single click you can drag through the window, changing some parameters on the fly – very handy, original and almost essential. With one drag you can change the main decay time or the low and high frequency decay, depending on where you drag the mouse – change the position and you change the crossover between the aforementioned controls. I have to admit, it couldn't be simpler.
Another nice feature is that the Room Size parameter covers all sizes, from ultra-small to big hall, so you don't need to change the general settings to get the whole size spectrum. There are also a few other capabilities that can turn the reverb into a creative monster, like the freeze function, which freezes the tail. And speaking of the tail, as you probably know this is the area where a reverb can shine or fail. Sparkverb has a really nice, musical tail – it never sounds artificial.
All in all, Sparkverb offers a nice selection of essential controls, all presented in one window, so there is no need to dig deeper to achieve competent results. It sounds better than most other reverbs (or at least as good as the best ones) and it is very CPU friendly. Containing, as it does, some almost revolutionary solutions and offering an impressive number of one-click solutions, paying 199 USD for such a nice thing is not a bad decision. If they add that pre-delay control in some future update, then only the sky will be the limit.
Visit UVI, watch the video, listen to the demo audio clips, download the demo and have a good time.
By the way, you will not necessarily need an iLok key to use and abuse this little Sparky fellow, as is the case with most other UVI software – you can choose between iLok and machine activation.
More about Sparkverb can be found on the UVI site.
FYI: UVI has updated Sparkverb to version 1.1, and we finally got pre-delay. Now only the sky is the limit.
Thanks, UVI, you did a great job. As always. 🙂
Symphobia 2 is a cinematic library – basically an orchestral library adapted to cinematic needs. But our reviewer thinks that Symphobia is a pure cinematic pit bull: it bites every time it hears the word “cinematic”.
by A. Arsov, Nov. 2013
Are There Any?…
Are there any cheaper orchestral libraries around? Yes. Are there any better sounding orchestral libraries on the market at the moment? Yes. So what, then, is the point of this one? Why should anyone bother buying Symphobia 2 by Project SAM? The answer is pretty simple, and you can easily find it on the library's site – or, if you have purchased the boxed version, right under the Symphobia name. It is a cinematic library: basically an orchestral library, but totally adapted to cinematic needs. Plenty of additional articulations, effects and similar add-ons, along with basic articulations that are also adapted to the nature of cinematic music. To be honest, Symphobia 2 is a pure cinematic pit bull: it bites every time it hears the word cinematic.
I'm not sure if you are aware of this, or have thought about it, but cinematic music is built on specific phrases – we could even call them rules – that make a big gap between classical orchestral music and cinematic music. They both use the same instruments, the same tools, but the end results are not equal. You simply can't go shopping in a Lamborghini, and you can't race your brand new Mercedes CLS, no matter that both cars have wheels and an engine.
Attack Is the King
The main issue with cinematic music is attack. There are a lot of fast passages, dramatic moments where staccato violins should fly like a bunch of mad bats while cellos and basses bark; there are also a lot of moments when instruments should bite in the attack phase, continuing with a long legato note with long decay and moderate release. If you have ever tried to get these results with a normal library – best sounding, cheaper, or whatever we mentioned in the first sentences of this review – you will soon realize that this is not an easy task. Programming, programming and plenty of layering.
To use well-known Microsoft terminology, Symphobia 2 is a “plug and play” cinematic library. Dangerous staccatos, thrilling orchestral patches, sinister basses, along with a nice set of opposite articulations: lush strings and various legato instruments. (Though if you are serious about your cinematic intentions, you should consider buying Lumina from the same developer, to cover all aspects of the more lyrical moments.) Tons of various orchestral effects that are almost impossible to recreate without an orchestra of flesh and blood – effects that you can hear in many blockbusters. If we consider the additional, essential non-orchestral cinematic sounds – hits, textures and pads along with effects – that are also packed into Symphobia 2, we get the most to-the-point cinematic library on the market at the moment. To make the picture a bit clearer, let's walk through the various sections of the library.
Before I even cast an eye on the instruments, I went directly to the multi section of the Symphobia 2 Kontakt library: 21 multi combinations, where different instruments, articulations or parts are laid out across the keyboard, ready to use and abuse. Many of them will bring you tears of sweet memories from the time you first saw your favourite horror movie. Scary, scary and even more scary. Others bring action and tension, some of them sadness, excitement, festivity, or sometimes a combination of all of the above. All in all, they nicely cover the majority of the moods of the whole library and are a good starting point for making a cinematic score. Maybe we should point out that Symphobia is not just a string library – it is a full orchestral library with some special additions. So various percussion instruments, some of them with impressive attack and strength, are also included, and the same goes for the wind and brass instruments.
Highlights in Pieces
In the first instrument section, called Full Orchestra, we find various combinations: orchestral chords in major and minor, strings with woodwinds, short hits along with long textures, dreamy textures and effects, stacks and other exotic things. Every instrument you load has additional articulations accessible through key-switches, or simply by pressing the articulation name in the middle of the instrument window. Nice and easy: in the past I was forced to learn where on the keyboard a particular articulation lived, while here you can just use the golden rule “press and play”. (And not “plug and pray”, as many PC cards used to behave during the installation process way back in the first version of Windows XP.) So, if you need another articulation on a different MIDI channel, just load another instance of the same instrument, select the articulation and MIDI channel, and there you go. Of course, another instance of the same instrument doesn't use any additional RAM.
The next section is “Individual Sections”, where we find all the instruments of the orchestra divided into groups: trumpets, horns, flutes, cellos with bass effects, various strings in various combinations with effects, and even an instrument – or better to say, a preset – with string phrases where the orchestra plays slow, scary transitions that would be impossible to recreate through programming. Most of the Symphobia presets, instruments or articulations – whatever we call them – don't use an enormous amount of RAM. Most of them use under 100 MB, so you can go totally bonkers combining various elements from the library.
The Legato Elements section offers legato versions of some single instruments along with ensemble instruments. We could call this section the melody nook – a nice place to spice up a cinematic arrangement with some lead lines or eternal melodies.
The section called “Dystopia” is stuffed with “out of this world” sounds and effects representing fear, paranoia, madness and all similar crime-in-the-city goodies. With Dystopia you can make an audio version of The Ring I and II. You don't even need any text for that – just the pure sound of horror that you can find in this section.
The last one is the Miscellaneous section, where you can find an orchestra tuning preset along with concert hall noise.
Symphobia 2 is not such a cheap library, but if you are serious about your business, then you should consider it an essential cinematic tool. Action, horror, war, crime… you have them all inside one package. 749 EUR or 999 US dollars will buy you a ticket to cinematic heaven.
More info, along with some video clips and audio demos, can be found on the Project SAM page.
Need a beautiful dreamy female voice that is controlled through your keyboard? Maybe Shevannai is just what you’re looking for. Find out more in this review.
by A. Arsov, Nov. 2013
This is a voice library – at the moment, the best one on which you can spend your money. A beautiful, dreamy female voice is controlled through your keyboard. With a little programming effort (if you are not a skilled keyboard player) you could safely offer 149 € to anyone who can recognize that you haven't recorded your song with a top-notch lady vocalist. I cooperate with several different vocalists, even one opera diva, but having the luxury of one within reach of your hand – operating rather than cooperating – is a special bonus. The library is intended to appeal mainly to movie composers, but it is a very handy addition to any dance, trance or pop composer's arsenal, adding a human voice to your instrumentals.
The library is divided into three main sections: The Voice, Phrases and Soundscapes. Legato, the first patch in the Voice section, contains five different legatos switchable through the lower keys. Each of these legatos, which use different vowels, is also offered with 33 different words that you can combine with those essential legato vowels. Maybe “word” is a bad term (although it is used in the manual) for those samples – they are mainly syllables ending in a vowel: Mee, Sha, Ra and so forth. So they are not aimed at building actual sentences, but they build the atmosphere greatly, using the voice like an instrument. This is not such an uncommon practice in modern production, so nobody will notice that something non-organic, virtually compiled, lies beneath the vocal line.
The only problem is that you will need some additional practice to master voice programming. As nomen est omen, those patches are legato, meaning that they sound best when they are played legato, overlapping the notes; if you want to change the syllable you have to interrupt the legato playing, as overlapped notes can't have a different syllable between them. (Nope, it is not Eduardo's fault – it is the nature of the legato technique.) And secondly, whenever you release the note reserved for one of those syllables, the default legato vowel takes over, so playing a lead vocal line with this library is like playing an old organ, constantly changing the registers on the fly. It is a bit tricky, and at first I couldn't figure out how to deal with it (except recording in a loop – lead line first and then the “words”). So I took my time watching, over and over, the video clip on the Best Service site to figure out how Tarilonte did it, and after a few minutes of practice I finally came close to getting some decent results. (You should consider that I'm a guitar player, playing keyboard like an old monkey.) All in all, it is not so hard, and the end results are quite rewarding. I should admit that my first song with this library was made just with the basic vowels, and even using only that basic voice the vocal line sounded impressive and absolutely realistic. I forgot to mention that vibrato and expression work like a charm. No objections.
The next patch is Staccato, where we get sets of syllables, short and longer ones, along with a crescendo. As you can presume, there are no legato problems, so everything is as it should be. Same results as legato: ultra realistic and quite impressive. In combination with the legato patch, it is a win-win situation.
The last patch in the Voice section is inhales and releases – so real that the sound is a bit creepy. When I tried the library, my kids left the studio during the inhales and releases, as it was too scary for them. It is not the general intention of that patch to scare people, but when someone who is not in the room with you breathes all around you, it can be a bit uncomfortable. I think Eduardo surpassed himself with this one.
The Phrases section contains several patches: melodic phrases in different keys, spoken phrases, whispered phrases and short whispered phrases. In combination with the Voice section you can make wonders – all the phrases are at the same, very high level. Everything is spoken and sung in a nonexistent language, but thanks to Lara Ausensi, who gave her voice to this library, every phrase is so persuasive that you can trust every word you hear, no matter that you don't understand what she says. That is how it goes in the Elf world.
The last section contains a nice selection of soundscapes that offer you mystique: a set of voices, a few out-of-this-world pads and two or three Hammond-in-heaven-like instruments. Anyone who knows Tarilonte's previous work knows that he is quite proud of his pads and atmospheres, and to tell the truth, he has every right to be. Every preset sounds so airy, full and dynamically versatile that we should take our hats off when we step into this holy soundscape space.
All in all, this library costs as much as an average virtual instrument, but for that money you get your personal live vocalist. To me, this is priceless. Shevannai brings a real human voice into your production; it is not one of those “sounds almost like” or “pretty damn close” sorts of libraries. It is as real as a recorded vocalist could be, and it sounds as if your work was done by one of the world's best recording magicians. So it is just a question of whether you need such a thing or not. I definitely need it. I even plan to use it alongside my vocalists. The human voice was and always will be the most impressive instrument known to the human race.
149 € / 159 $ and 2.5 GB of your hard disk is all you need. More info, one video clip and many audio clips can be found at the Best Service site.
Mark of the Unicorn's Mach Five sampler has been around for a number of years now, but it was only last year that it received an update. See what's on offer in this close-up look.
by Warren Burt, Nov. 2013
Mach Five Three – the All-in-one, with Elegance
Mark of the Unicorn's Mach Five sampler (Mac and PC) has been around for a number of years now, but it was only last year that it received an update to version three. The update was more of a complete makeover, and the dozens of new features that have been added catapult it into the top three or four software samplers available today. It's a strong contender for the number one slot, but in the software world things change so rapidly that I don't want to make predictions like that. Suffice it to say that the software is a highly efficient, pretty easy to learn BEAST, and it really can lay claim to being just about the only sampler/synth that you would need. It's absolutely serious competition for Kontakt 5, EXS24, or any of the other flagship samplers out there.
First on the features list is – ta dah! – a printed manual. You can actually work on screen with the sampler and have a paper document next to you to refer to. No more switching from one window to another! And you can actually learn about the software without a computer present! This kind of innovation in documentation is one that more manufacturers should consider. (grin) The manual is clearly written, and if there is a bit too much “see chapter 3-5, p. 46” for my taste, the software is so complex that I don't know how else a manual for it could be written.
Last year I was considering getting a “real sampler,” as opposed to a sample player, such as Garritan’s Aria, for which I was writing my own sfz files. I wanted a sampler with more features which offered greater ease of programming, more options for sound treatment, and of course, the ability to micro-tonally tune it and have possibilities for scripting. The promo for Mach Five Three promised all this and more. I then ordered it early in the year, and after the obligatory four month wait (Australia is at the end of a very long and leaky supply line for many things technological – I just waited three months for a windscreen (!) for my Zoom sound recorder!), the software finally arrived. It required an iLok, and sure enough, a week before the software arrived, I lost my iLok. So there was more delay while I got a new iLok and renewed all the licenses on it.
Once all the problems were solved, it was very easy to install, and because of the iLok, I tried installing it on a number of computers to see how it would fare: a Toshiba i3 laptop with XP, a Dell Centrino Duo with XP, an ASUS 101 netbook with an Atom processor running XP, and an ASUS Vivo Tab running Windows 8. The software worked fine on all of these. CPU percentages varied from computer to computer. I made a patch using the “Organ” oscillator, the most CPU intensive module in the software, and ran a random composing program with Algorithmic Arts' ArtWonk that generated many overlapping notes per second – a real torture test for the software. The average CPU percentages for this were Toshiba: 20%; Dell: 32%; ASUS XP netbook: 55%; ASUS Vivo Tab: 48%. In standalone mode, using each computer's internal sound card, sound on the Toshiba and the ASUS XP netbook was flawless. There were some breakups on the Dell, and more breakups on the ASUS Vivo Tab. Running as a plug-in under AudioMulch, and using an external sound card (a Roland UA-4FX), sound was flawless on the Toshiba, Dell and ASUS XP netbook. There were still a few breakups on the ASUS Vivo Tab, but adjusting the latency to 4096 and using an ASIO driver for the UA-4FX eliminated those. The conclusion here is that the more CPU power you can throw at Mach Five the better; however, if you have an older and slower machine, it will still work just fine with lower polyphony.
The main purpose of the program is to sample, and there are several types of sampling available. In the program these are called Sample, Stretch, Slice, Ircam Granular and Ircam Stretch. Sample is the traditional sampler, and you can do bulk loading and mapping of a set of samples very easily, although the method is a bit different than is described in the manual. All kinds of layering, looping, crossfading, etc. are available here. Stretch uses granular techniques to play the sample at the same preset tempo at all pitch levels. If you get the granular settings right, it can be very effective. Slice is a beat-slicer: you can set the slices manually, or you can let it do the slicing for you. There are a number of controls for the fineness of the slicing, and once you have a slicing you like, you can do a one-click mapping of the slices to a MIDI keyboard. It can also save a MIDI sequence of notes which reflects how the slices can be played to recreate the rhythm of the original. Ircam Granular is licensed from the French Ircam music research institute. It uses a much more sophisticated kind of granulation to stretch sounds. It can seamlessly time-stretch just about any sound, although you do have to fiddle with the settings a bit. The most critical control, I've found, is the “jitter” control for “realistic” time stretching (“realistic” in the realm of virtual sonic fictions, that is!). Ircam Stretch uses a phase vocoder FFT algorithm to stretch sound. It's extremely CPU intensive, but with the right settings it sounds absolutely delicious. I recently made a piece with it where I stretched a piano sound, then exceeded the polyphony limits my Toshiba XP i3 machine could deliver, and the glitches it made were absolutely gorgeous. So even driven into malfunction, Ircam Stretch produces gorgeous (though maybe unintended) sound.
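The granular stretching described above can be sketched in miniature. The following Python fragment is a naive illustration of the general technique – overlapping windowed grains re-laid at a scaled hop so duration changes while pitch does not – and is emphatically not Mach Five's or Ircam's actual algorithm. The `jitter` parameter, which randomizes grain read positions to break up metallic artifacts, loosely mimics the control singled out in the review; all names here are the sketch's own.

```python
import numpy as np

def granular_stretch(x, factor, grain=2048, hop=512, jitter=64, seed=0):
    """Naive granular time-stretch: overlap-add Hann-windowed grains at a
    scaled hop so the output is `factor` times longer, at the same pitch."""
    rng = np.random.default_rng(seed)
    window = np.hanning(grain)
    out_len = int(len(x) * factor) + grain
    out = np.zeros(out_len)
    norm = np.zeros(out_len)              # sum of windows, for normalization
    out_pos = 0
    while out_pos + grain < out_len:
        # the read head in the input advances 1/factor as fast as the output,
        # plus a small random offset ("jitter") to de-correlate the grains
        read = int(out_pos / factor) + int(rng.integers(-jitter, jitter + 1))
        read = min(max(read, 0), len(x) - grain)
        out[out_pos:out_pos + grain] += x[read:read + grain] * window
        norm[out_pos:out_pos + grain] += window
        out_pos += hop
    return out / np.maximum(norm, 1e-8)   # undo the window overlap gain

# double the duration of one second of a 440 Hz tone at 44.1 kHz
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
stretched = granular_stretch(tone, 2.0)
```

With four-fold grain overlap (`grain=2048`, `hop=512`) the result is smooth on sustained material; transient-rich sounds are exactly where the fiddling with grain size and jitter that the review mentions becomes necessary.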
Any and every control on all the sampler modules can be MIDI-controlled, and once you drop a sample into the program, you can freely switch between modules, hearing how the sample is treated in each one.
In addition to sampling, there are several kinds of oscillators: Analog, Analog Stack, Noise, FM, Wavetable, Drum and Organ. Analog is an analog oscillator emulation with the usual suspects: sine, square, triangle, saw, pulse and noise. Using this and the full range of LFO controls, VCAs, Filters etc. (more on these below), one could use Mach Five Three as a fully featured analog synthesizer emulation. The Analog Stack is eight of these oscillators ganged together, for some work with additive techniques. Noise has eleven different kinds of noise, some with quite effective controls for varying timbre. FM is a four-oscillator FM synth with twelve algorithms for connecting the four oscillators. This one sounds great. Wavetable provides dozens of preset wavetables (short waveforms) to play with, and you can drag and drop your own into them. Drum is a combination of a gated tone and gated noise. At first, when I read about it, it sounded very uninteresting, but I was delightfully surprised to find that it made a wide variety of very attractive percussive sounds. Organ is a drawbar organ emulation. Very good sounding, and very CPU intensive. You won't be using this one to play Jimmy Smith licks live on your netbook (believe me, I tried!), but with a powerful enough computer, it shines.
The program uses a hierarchical organization: Oscillators (whether sampled or waveform based) are grouped into Keygroups, which are grouped into Programs, which are grouped into Multis, which travel to a Mixer with lots of Aux sends and receives. The Keygroups go through a selectable patching of VCA, LFOs, Envelope Generators (several types, including user-drawable multipoint envelopes) and Filters (dozens of types!). There can be an unlimited number of Programs in any Multi, and Mach Five Three can accept up to 64 MIDI channel inputs at any one time. For a recent improvisation, I programmed 16 Programs into a Multi. Each Program had between 10 and 128 samples in it, and each was on a different MIDI channel. I played this with my Roland PCR800 keyboard, with the program running on my Dell XP Centrino Duo machine. The playing of the sampler and the sound quality were absolutely flawless. Latency was pretty close to zero – playing didn't feel mushy at all.
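To picture the containment hierarchy just described, here is a small hypothetical sketch in Python. These class names merely model the structure the review lays out (Oscillators inside Keygroups inside Programs inside Multis); they are not MOTU's scripting API, and the 16-program example mirrors the improvisation setup mentioned above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Oscillator:
    kind: str                      # e.g. "Sample", "Analog", "Organ"

@dataclass
class Keygroup:
    lo_key: int                    # MIDI key range this group responds to
    hi_key: int
    oscillators: List[Oscillator] = field(default_factory=list)

@dataclass
class Program:
    midi_channel: int              # up to 64 channels feed one Multi
    keygroups: List[Keygroup] = field(default_factory=list)

@dataclass
class Multi:
    programs: List[Program] = field(default_factory=list)

    def programs_for_channel(self, ch):
        """All Programs listening on a given MIDI channel."""
        return [p for p in self.programs if p.midi_channel == ch]

# a 16-program Multi, one Program per MIDI channel, full-range keygroups
multi = Multi([Program(midi_channel=ch, keygroups=[Keygroup(0, 127)])
               for ch in range(1, 17)])
```

The point of the model is that anything attachable at one level (an effect, an envelope) can equally be attached at any other, which is exactly the routing flexibility the next paragraph describes.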
The hierarchical organization means that an effect, for example, can be applied at any level of the hierarchy. You can have effects applied to Oscillators, Keygroups, Programs, Multis, or as sends and receives in the Mixer. Routing can get as complex as you want.
Many people buy a sampler for its content. I'm not one of them, being a roll-your-own kind of guy. The program, though, comes with 45GB of content. (I just went out and bought a portable 1TB drive for these, and then proceeded to load the rest of my 30 years of samples onto the hard drive as well – my whole sonic history on one handy drive! Which of course I then immediately backed up.) Some of these are very sample-intensive single instruments, such as the Piano and Guitar. The scripting on these is very good, although you'll need a hefty CPU to handle some of them, such as the piano. The Universal Loops and Instruments set is a very fine collection of General MIDI and bread-and-butter timbres, and the MachFive Biosphere set is a whole library of wonderfully glowing textures. Two DVDs are devoted to the UVI Xtreme FX set, which looks like it will take months to get through. The effects seem quite well recorded and I'm sure will prove useful. As well, there are excellent Piano, Bass, Drums and Guitar sample sets. I'm sure some people, more commercially oriented than I, will find these sample sets a real reason to buy the sampler.
In terms of importing other companies' sample libraries, the program is very versatile. It will import EXS24 (including Garage Band instruments), GigaSampler, Kontakt 1-4 (not Kontakt 5, though), Sample Cell, SF2 and VSampler2 files with ease. Although not listed, I also found it would import many kinds of .sfz files as well. When I got the sampler, I scoured the internet for free samples in many of the above formats. I found that EXS24, Giga and SF2 samples imported with ease. Kontakt 1-4 usually imported well. With .sfz files, the samples imported, but usually some programming was necessary. And UVI .dat and .ufs files are the native format, so those will of course import with all programming and scripting intact. Importing a variety of sample types with all the programming is quite a feat, and if the importing is not perfect across all platforms, it's still very good and will at worst allow you to import the samples and then make a Mach Five native program with them. For the EXS and Giga sample sets I found, however, no additional programming was necessary. I have a large library of VSampler3 samples made over 15 years. For these, I found that VSampler would save them as SoundFont .sf2 files, which easily imported into Mach Five. The 16-Program Multi mentioned above was mostly made, quickly and easily, with .sf2 files imported from my VSampler3 library.
Scripting is supported, as is a very powerful arpeggiator. This is an area I haven’t gotten into yet, but on the basis of the features available in the F Grand 278 piano sample set, and the Telematic guitar set, it looks like it is deep, comprehensive and very powerful. The scripting is done in the Lua language, which I’ve been meaning to learn for the past couple of years. Looks like it’s time to do that.
Microtonality is also supported. It is extremely easy to implement, and each program can have its own tuning. The ease of tuning was the reason that I chose Mach Five Three over Kontakt 5. Having written tuning routines for Reaktor, I know how convoluted that process can be. With Mach Five Three, one simply loads a Scala .scl file for a 12-note scale, or a Scala .kbm file to put in the mapping for a scale of more than 12 notes, followed by the Scala .scl file for the desired tuning. And when a program is saved, the tuning data is saved with it. So my requirement that a sampler give me easy, multi-timbral, multi-tuning resources which are recallable instantly is more than met by Mach Five. For a microtonalist, I think Mach Five is the only choice.
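The Scala .scl files mentioned here use a simple, openly documented text format: lines beginning with `!` are comments, the first remaining line is a free-text description, the next gives the note count, and each pitch that follows is written either in cents (always with a decimal point) or as a frequency ratio. A minimal parser sketch in Python – not Mach Five's importer, and it skips blank lines as a simplification of the full specification:

```python
import math

def parse_scl(text):
    """Parse a Scala .scl scale file into (description, pitches in cents)."""
    lines = []
    for raw in text.splitlines():
        s = raw.strip()
        if s and not s.startswith('!'):      # drop comments and blank lines
            lines.append(s)
    description, count = lines[0], int(lines[1])
    cents = []
    for line in lines[2:2 + count]:
        value = line.split()[0]              # ignore trailing annotations
        if '.' in value:                     # cents notation, e.g. 240.0
            cents.append(float(value))
        elif '/' in value:                   # ratio notation, e.g. 3/2
            num, den = value.split('/')
            cents.append(1200 * math.log2(int(num) / int(den)))
        else:                                # bare integer means ratio n/1
            cents.append(1200 * math.log2(int(value)))
    return description, cents

# a made-up five-note scale mixing ratio and cents notation
scl = """! example.scl
!
Five-note demo scale
 5
 8/7
 240.0
 3/2
 960.0
 2/1
"""
name, pitches = parse_scl(scl)
```

The .kbm keyboard-mapping companion file works analogously, telling the sampler which MIDI keys those pitches land on when the scale has other than 12 notes per octave.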
As mentioned above, there are about 47 different kinds of effects (more were added in the latest 3.2.0 update), which can exist in any kind of chaining at any level of the hierarchy. Each effect has a number of presets and each allows the user to make and store their own presets as well. And of course every control is capable of being controlled by MIDI.
As you can see, the program is very deep, and a LOT of thought has gone into it. I wouldn’t hesitate to recommend it to anyone, with the proviso that for best performance you’ll want a hefty computer with a lot of CPU and a big fast hard drive to keep samples on. I have a LOT of software synthesizers (I especially love my family of LinPlug machines), and I’m certainly not giving up on those, but I can see that a lot of my work in the future is going to be done with this very sophisticated and marvellous piece of software.
Making music with animals is not just some hippy-dippy jamming in the woods, but a serious investigation of animal communication systems. Find out more in this book review.
by Warren Burt, Nov. 2013
David Rothenberg is a philosopher, scientist, musician and writer. He teaches philosophy at the New Jersey Institute of Technology and is well known for his ongoing series of books and CDs in which he explores the sounds of our acoustic universe. Among the most recent of his books is Why Birds Sing, where he investigates not only the nature of birdsong but also the reasons for it. For those who dismiss this question with “It's all just instinct, or territorial defence, or mating,” Rothenberg has quite a few surprises in store. Thousand Mile Song was a book about whales and their music, while Survival of the Beautiful was a serious investigation, deriving from Darwin, as to why there exists a sense of beauty in nature, and how a number of species seem to share this sense. Not since Charles Hartshorne (1897-2000) has there been a philosopher who has investigated nature, and the reasons behind our perception of nature, so omnivorously. And Rothenberg points out that the progress of art in the 20th century has not only been influenced by science but has also influenced it. As Jaron Lanier sums up Rothenberg's argument, “He argues, among other things, that without modern art, modern science would have been hobbled by inadequately challenged cognitive habits. Beauty evolved.” This is quite a challenge to those in the scientific community who see all cultural activity (including all of music, according to Stephen Pinker) as simply excess baggage or ornamentation to the basic thrust of physical interaction.
His latest book, Bug Music: How Insects Gave Us Rhythm and Noise, completes the trilogy set in motion with the bird and whale books. Each of these books is also accompanied by a CD, in which Rothenberg attempts to make music with each of the animals in question. In some cases, the interactions that result are quite remarkable. Making music with animals is not just some hippy-dippy jamming in the woods, but a serious investigation of animal communication systems and the systems and abilities we share in common with other species. And the playing with other species reveals principles that work in human music as well. “If you can't hear the whale, you're playing too much,” Rothenberg states. As a teacher of improvisation, I frequently give the same advice to my students. With Bug Music, though, Rothenberg has taken on a bigger challenge. As he himself states in the introduction, even the hardest sceptic can accept that there is something musical to the sounds made by birds and whales. They are, after all, higher intelligences with complex brains and at least a modicum of self-awareness. But bugs? Insects are simply programmed automata at the lowest level of invertebrate intelligence. And hive intelligence, such as that exhibited by bees and ants, is simply an emergent behaviour, so the argument goes. Their sound, surely, must just be a by-product of instinctual behaviour with nothing more behind it – and the hatching of the cicadas, which fills the air with extremely loud droning, must be nothing more than an annoyance.
Rothenberg again disputes this. First of all, he’s in love with the sounds of the cicadas. And then, as he researches the area more and more, he finds that the sounds made by insects vary widely in complexity, purpose and (from our sonic perspective) beauty. He even attempts music making with insects. The results of this can be heard on the accompanying Bug Music CD. What is very surprising is that in many cases the insects DO respond to the human music making activities, and some kinds of interaction do indeed happen.
Rothenberg has a lovely writing style. He combines straightforward science reporting with anecdotes from history, and descriptions of his own activities. I wish more science writing were as good natured and fluidly written as his. You’ll be reading along, and suddenly you’ll realize that you’ve read a very well digested summary of a complex scientific paper, AND that he’s related the material in the paper to other sources of information in a way that the original authors couldn’t do.
The book is filled with amazing facts. One with great relevance for music is his examination of a spectrogram of Mbuti women from the Republic of the Congo singing on a warm evening. The spectrogram not only shows the women’s voices at the bottom of the spectrum, but arrayed above them, all the different species of insects in the jungle, each in their own frequency range. Each species occupies a different niche in the sonic spectrum. This phenomenon was first noted by synthesist and acoustic ecologist Bernie Krause several decades ago, and the book gives several elegant examples of it. Anyone who has read a popular audio magazine article on mixing modern dance music in the past 10 years is well familiar with this phenomenon. The articles, without exception, always tell you to filter or EQ one instrument or another so that it doesn’t interfere with the ranges of the others. Why would they do this, or rather, why do we hear this way? Because we’ve been hearing acoustic environments defined by sonic niches for all of our evolutionary history. And this is not the only area where insects have influenced human music. The book abounds with examples of where the sounds of insects have informed human music making, not all of them as trivial as, say, The Flight of the Bumblebee.
And as for insect sounds being simple and repetitive, Rothenberg has a number of surprises here as well. He interviews composer and acoustic ecologist David Dunn about his work with the sounds of underwater freshwater insects and the sounds of bark beetles in pine trees. Both of these are so soft that humans can’t hear them. But when heard through specially designed contact mics, sounds of incredible complexity are revealed. Similarly with scientist Reginald Cocroft’s work on the sounds of tropical treehoppers. These sounds are so soft that they can’t even be “heard.” Laser accelerometers and vibrometers have to be used to bring their vibrations into our hearing range. When they are, sounds that rival the complexity of birdsong are revealed.
A rich combination of philosophical argument, science reporting, narrative stories about sonic interactions with insects, and accounts of meetings with a wide variety of folks, Bug Music may not convince you that the cicada screaming next to your window is beautiful (although he did convince me), but it will keep you interested for hours and open up a sense of wonder and questioning about the diversity and richness of our sonic universe, and our place in it. Highly recommended.
www.bugmusicbook.com (this website contains some great videos and sound clips of things discussed in the book)
If you don’t do anything regarding presentation – exposing your song to wider audiences – nobody will. If you don’t have a video, then you don’t exist – at least not in the music world.
by A. Arsov, Nov. 2013
I wrote a similar article a few years ago, but there is no harm in doing it again, as we all concentrate on making our music as good as we can without being aware that a good song is just a good beginning! The good old times when Indiana Jones flew around discovering undiscovered things are gone. If you don’t do anything regarding presentation – exposing your song to wider audiences – nobody will. Kids, and even some senior folks, don’t care much about Soundcloud because they all watch their favorite music on YouTube. If you don’t have a video, then you don’t exist. At least not in the music world. The only other route to the heart of the listener is live concerts. I’m afraid that there is no third way.
The good news is that making a music video is not as expensive as it used to be. You will need a few hundred euros for the first video, but all the equipment you buy will serve you for all sorts of purposes, and you will be able to do your next video clip for no extra money.
The first thing you’ll need is a good camera. The only condition is that it needs to be a Full HD DSLR camera. The best budget cameras for that purpose are the Canon EOS 600D (or 550D or 650D) or the Panasonic Lumix GH2 or GH3. There are always special sales with big discounts offering an essential lens along with the camera body. So for €400 to €600 (about $700 USD), you are a good way towards making a really good masterpiece. The only thing you should take care about is that a lens kit is included in this price. To quote the Bible: “Search and you shall find.” Of course, if you already have a more expensive camera, then just ignore this first part 😉 . Also, if you have a Nikon with Full HD ability, you can use it; it is not quite as good as the Canon or Panasonic, but it will do the job.
If you are looking for that well-known “movie look”, where the main subject is sharp while everything around it is blurred (shallow depth of field), then you will need a fast lens with a maximum aperture of around f/1.8 (the aperture range is printed on the lens; a typical kit zoom reads “f/3.5-5.6”, which is not fast enough). That is a lens that you probably will not get with your discounted kit. Buying an additional new lens is one option, costing around €100 to €150 ($170 USD). A much cheaper option is to get an M42 lens adapter for your camera on eBay, which will cost you around €3 to €5 ($6 USD), and then search for an old second-hand M42 lens with f/1.8. If you are lucky, you can get one for €30 ($40 USD). The only deficiency is that you need to set the focus manually, but with digital cameras this is not so hard any more: you can use the digital zoom to focus on your subject, and then simply zoom back to the normal view.
The next thing you will need is a photography lighting kit; there are plenty of them on eBay. A blue screen is also a very handy thing that you can get on eBay, costing between €20 and €50 ($30 to $70 USD).
There are plenty of video clips on YouTube explaining how to use all those weapons. More or less there are only two issues: 1) Don’t overly light objects or your band members will look like they are coming directly from Tahiti, and 2) Don’t forget to put some light behind the people that are standing in front of the blue screen (otherwise your software will have problems in cropping the object properly).
If you want to be totally in, looking über pro, then you should also consider buying some sort of slider for your camera. You can get short models for under €75 ($100 USD). It will bring some movement into your static shots.
The next thing you will need is a good location. Just take a short walk around, and you will soon spot some fine places. In all my previous videos, I got all the places for free. All you need to do is ask the owner and explain to him that his place will be seen on a video. That’s the moment where most of them will say “yes,” and even more, they will be glad to offer you some help. It doesn’t happen every day that someone is shooting a video in their place or neighborhood.
Work In Progress.
OK. It is time to start shooting. It is always best to combine scenes where the band plays the song with some other scenes that are not music-related, which brings some storytelling into your video. It depends on what your song is about, but an essential place representing the topic of your song can work wonders. As you have only one camera, repeat some shots from different perspectives. In the first take you can shoot the whole band from a few meters’ distance, then make a few additional takes capturing just faces, or the upper parts of bodies, or exploring some additional interesting angles. Just take care that the band members or actors don’t change position from take to take, as it would be strange if the bass player is on the left side and then, a second later, he is on the right side in a closer shot.
Later, when you build your video, you will need all that extra material, because a general rule is that you need to make a cut at least every four seconds. It can be the same scene, just zoomed in or out, and as long as you don’t make any cut using the same perspective without changing angle or zoom, you are on the right path.
Also try to tell a story from beginning to end. The main character should transform throughout the video into a different person, one who has resolved the issue that your song is about. (Unless your song is about nothing; then even that could be resolved in the same way.)
Everything Is Sorted, What’s Next?
It is time for Cyberlink PowerDirector. I made all my previous video clips with Magix Movie Edit Pro, but if you do some internet research, as I did, you will find that Cyberlink PowerDirector offers more for less money. Not only does this program offer all the goodies you will need for finishing your task, it also offers an almost endless quantity of all sorts of things that can be downloaded from DirectorZone, where various users upload their effect settings, color presets, templates, and even tutorials. On the Cyberlink site you will easily find video tutorials which explain how to start and even how to become a semi-pro in just a few days. Don’t be afraid; the program is pretty intuitive, and with a little help from the tutorial video clips, you can start messing around in less than an hour. It took me less than a week for my first video. The truth is that Cyberlink PowerDirector is so stuffed with all sorts of fancy video effects that most of your time will be spent toying with the endless abilities of the program itself. For less than 100 bucks you get a very professional movie-editing program.
You may ask yourself whether you really need such software if you already have Windows Movie Maker. The thing is that Movie Maker is good for handling home movies, in order to present them on YouTube or at family meetings. As you work on your video, it is not only that you need all those special effects for spicing up your video and making it more interesting; it is also a matter of quality: various TV stations, MTV, or even local ones, have some strict requirements. It is not only a question of the HD standard; they are also looking at the general quality of the video: color grading and grain. What is good enough for mobile is not good enough for a High Definition plasma TV, and yes, with Cyberlink PowerDirector and a Canon or Panasonic, you can really make very professional videos, even if you are a beginner, as I was not so long ago.
Among other goodies, Cyberlink PowerDirector offers TrueTheater plug-ins to fix your badly captured movies from various SD cameras, making them look like HD. OK, don’t use this for the whole video, because after all HD is HD, and there is no effect that can convert SD to HD without loss or artifacts. That is not a matter of plug-in quality, but mainly a problem of SD quality; but for short shots, it does wonders. The same package offers Stabilizer, De-noise and Lighting correctors, so your shaky, badly lit scenes with too much ISO can be corrected with this software. ISO? What is ISO? You will find it on every good camera. Read the manual, and consider one thing that is not described in the manual – always use low ISO values. If everything is too dark, use a longer shutter speed rather than a higher ISO, as otherwise you will get too much noise in your video, and even the almighty Cyberlink PowerDirector cannot help you.
So back to business. Now that you have enough material and all the tools, it is montage time. If this will be your only video, then you can buy PowerDirector Ultimate for $99 USD. If you intend to make more than one, then I would recommend PowerDirector Ultimate Suite for $209 USD. The main difference (OK, there are more differences, but this one is important for you) is ColorDirector, because as soon as you show your product to a professional, as I did, he will first ask you if you did the grading, and then ask why you didn’t do it. To make a long story short: ColorDirector can do wonders. Most of the material you shoot needs some sort of color adjustment. Some shots need just a slight improvement, because you didn’t set the white balance correctly, while others need to be adjusted to match the general feel of the video. It is a paint desk that can change the overall mood of your video. If you are into photography, then add a few bucks and buy Director Suite 2. (This is not an advertising article, so if you are interested, please find more on the Cyberlink site.)
PowerDirector is a pretty simple program, and there are more than enough getting-started videos on their site. The program offers almost everything that all the extra-pricey programs are offering, so take your time and try some crazy effects that can change your boring, everyday experience into a wild adventure. (It is not a documentary, so too much is never too much in the MTV world.) Even a simple thing like motion blur can make a difference. And there are lots of different effects, transitions, particles, and PiP objects (picture-in-picture overlays, frames and similar things). Most of them are intended for home video use, but you are free to abuse them in unusual ways. If you have time, I’m sure that you can find some even more music-related additions while digging through DirectorZone’s zillions of additional effects.
There are also plenty of other alternatives on the market — quality programs that offer different things, additional effects and other goodies…. I’ve tried a few others, but this one is the only one that played Full HD videos without any problem on my old dual-core computer. It is a bit painful when you can’t properly preview your big video files during the editing process, especially if you do some animation. (Yes, this program also offers that – even freehand animation is possible with this tight-budget tool.) After some additional research I found that this is not the only advantage of the program.
Judging by the user base represented on DirectorZone, Cyberlink PowerDirector offers the most for your money at the moment. I also found some additional tutorials on YouTube presenting some great tricks and tips that can be done with this program, so spinning the heads of your band members inside flying balls is just one among many things that can make your video stand out from the crowd. So, take your camera, grab a photography lighting kit, set up your green screen and go, go, go!
You have a great song, but trust me, this is not enough. So grab some equipment and do something for yourself. Don’t blink in the dark; impress the world by spending less money than you spent on your last three musical plug-ins.
We want you! If you have made a video with the budget tools described here, let us know, and we will add your video below this article.
You can find more about Cyberlink PowerDirector on the Cyberlink site.
Christian Siedschlag, the man behind the DDMF, knows a thing or two about equalization since he created a respected EQ plug-in. He shares some of his expertise with us here.
DDMF – it is a name that suddenly popped up in a field crowded with big names, an independent “one man” company that became the subject of many rumors on various music-related forums, such as “Did you hear about that cheap equalizer that sounds better than most of the expensive ones?” At the time, I was involved in an extensive search for my main equalizer. I tried most of the expensive ones, and they sounded good. But some of them didn’t have enough bands, others offered editing only through the knobs, and some of them didn’t even sound as they should. Some sounded good but were CPU intensive. I didn’t expect too much from this newcomer. What a mistake.
To make a long story short, Christian Siedschlag, the man behind DDMF, created an equalizer that became my main secret weapon, one that I’ve used for many years on every song, on literally every track. His IIEQ Pro can sound analog or digital, it offers a wide range of filters, and what’s more important, it sounds right. It is light on the CPU and easy to operate, and no matter what you use as a source, it always sounds natural, whether you cut or boost the material.
We all use equalizers, but what do we know about them? Let’s draw back the curtain to discover some basic facts about this tool, and even more, to unveil the secret – how those tools are made in the first place. So, let’s hear the truth from the big master – Christian from DDMF.
More about DDMF on http://www.ddmf.eu/
Equalization for Rookies…
Remember that old saying “Talking about music is like dancing about architecture”? Well, to some extent that statement is probably also true when talking about the art of mixing music: it remains an abstract matter when you don’t have actual sound examples at hand, which is necessarily the case in an article like this. Plus, many of the world’s greatest mixing engineers “feel” more than they “know” what to do in which situation. Fortunately (for our purpose), I’m definitely not one of the world’s greatest mixing engineers but just a developer of audio software. By nature, I’m forced to translate my intuitive understanding of things into working software code, which is why there’s good reason to hope that I might be able to explain a thing or two about digital signal processing, and about equalization specifically, which is the subject of the present article. The target audience, as stated by the title of the article, is “rookies” so for the more experienced readers a lot of things will probably sound familiar.
What is Equalization?
Let’s start with the definition: what is equalization? These days we fortunately have Wikipedia, so we can readily answer “Equalization (British: equalisation) is the process of adjusting the balance between frequency components within an electronic signal“. This definition implies that any electronic signal has frequency components, and indeed it is this mathematical fact that lays the foundation for all techniques of equalization. Sound, if viewed from an engineering perspective, is nothing but a sum of individual frequencies (sine waves) played together. The relative strength and timing of these frequencies with respect to each other uniquely describe a certain sound (or a whole piece of music) and is called the “spectrum”. Equalizers are devices (real or virtual) that allow the mixing engineer to lower or raise parts of the spectrum without affecting other parts of the spectrum.
In Fig. 1 we see an example of a spectrum, frozen in time. The horizontal axis is the frequency axis, showing at what frequency a contribution to the overall sound is oscillating, while the vertical axis measures the strength (loudness) of each contribution. The frequencies are measured in Hertz (which are just oscillations per second) and the loudness in decibels (dB). The huge peak at about 80 Hz is the bass drum. We see a lot going on in the range between 200 and 5000 Hz, which is where vocals and all the other instruments are fighting to be heard, and then a slow drop off towards higher frequencies. The sharp drop off at about 16 kHz is a result of mp3 compression, by the way. The flat white line is the equalization curve, as the screenshot is taken from one of DDMF’s EQs, without any equalization applied.
The Task of EQing
You’ve probably already come across the following situation a few times: you have a track with a number of instruments and maybe some vocals which all sound great when played individually, but after mixing them together, it just sounds muddy and crowded. That great guitar line suddenly is barely noticeable, and when you raise the volume on the guitar track, the vocals start to disappear. This is where you need to reach for your EQ. EQing is all about carving space in your mix so that the individual tracks don’t get in each other’s way.
In the beginning (and not only in the beginning) it is very helpful to have a chart that approximately shows where in “frequency space” instruments typically have the strongest components. There are a lot of these charts available on the internet, and one of them is shown in Fig. 2. These types of charts can help you when deciding where you need to apply equalization. Also shown are typical attributes that are often associated with certain frequency ranges, e.g. “warmth” at about 150-220 Hz. This means that when a track is lacking “warmth” that frequency area might be a good starting point for a little peak filtering. Which brings us to the next subtopic, namely …
Types of EQ Curves
There are equalizers out there which, using a technique called FFT, allow you to change the spectrum of a track in a free-form way. While this approach clearly offers the greatest amount of freedom and flexibility, you can also quickly experience a phenomenon called “choice paralysis”, where you have so many options and variables that in the end you can’t really decide in which direction you want to move. Especially in the beginning, it is much better to stick to tried-and-true “EQ curves” which over the years have been implemented time and again, partially because they were relatively easy to achieve with hardware, but also because they produce predictable and pleasant sonic results. These types of curves can be controlled with a very limited number of parameters, depending on the type of EQ you have. An equalizer typically offers a number of “bands”, which are individual filters through which the signal passes in series. The simplest form of equalizer, a “graphic” equalizer, only allows the change of one parameter per band, namely the gain of the band (in dB). In the age of software EQs, however, this is an unnecessary restriction: here we’ll be dealing with the more general “parametric” EQs, in which each band can be controlled by setting gain, frequency and something called “Q” or “width”, which determines the width of the range in the spectrum the band is operating on.
1.) High-pass/Low-pass Filters
These filters gradually block all frequencies below (high-pass) or above (low-pass) the band frequency. There’s no gain control, and the width can be used to change the filter response around the band frequency from very smooth to (in the extreme case) resonance-like. These filters are very useful to clean up the upper and lower end of the frequency spectrum. It is often a good idea, for example, to high-pass all but the kick and the bass at around 80-120 Hz. The lowest string of a guitar, for instance, has a frequency of 82 Hz, so everything below that can be safely discarded. Also, many engineers apply a low-cut filter to the whole bus signal at around 5 Hz, as this area can’t be heard by humans anyway and is only eating up unnecessary energy.
2.) Shelving Filters
Shelving filters boost or cut the frequencies above/below the threshold frequency by a fixed amount of dB. When cutting, they are a little less drastic than low- or high-cuts in the sense that they do not progressively lower all frequencies above or below the threshold frequency. Again, Q is used to shape the response around the threshold region.
3.) Peaking Filters
Peaking filters can be used to treat an isolated range of the frequency spectrum. They have a center frequency around which the response is symmetrical. There’s a boost or cut by the specified amount of dB at the center frequency, with a smooth fall-off around it. The width of the “active” frequency window is set by the Q value. Peaking filters can either be used to enhance specific areas of a track’s spectrum in order to make it heard more clearly (something which is easily overdone, though, so be careful!) or to reduce “annoying” areas. For instance, in order to decrease the “muddiness” of a mix, it is often good advice to apply a broad cut of 2 or 3 dB at around 300-500 Hz to the master (sum) signal.
4.) Bandpass Filters
Bandpass filters leave the center frequency untouched but cut out an increasing amount of dB with increasing distance from the center frequency. This type of filter can also be used to effectively decrease the necessary bandwidth to transmit a signal. A famous effect is the “telephone voice” which can be generated by a single band pass filter set to about 1000 Hz, with a bandwidth of about one octave. While often sounding thin when applied to a single instrument, it can help to make the instrument sit better in a mix and create space for the other competing tracks.
5.) Notch Filters
A notch filter is the exact opposite of a band-pass filter: it completely cuts the spectrum at the center frequency, and gradually less so around the center frequency, until it reaches 0 dB gain outside a window determined by Q. A notch filter is very useful for removing annoying or problem frequencies (for instance a 50 Hz humming from the power supply). A nice technique when you have the feeling that a track has some annoying component but you can’t exactly figure out where it sits is to apply a large-gain, narrow peaking filter and slowly sweep it across the frequency spectrum. When you have localized the problematic area, apply a notch filter there, with a width that’s large enough to be effective but small enough to avoid any unwanted side effects.
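To make the parametric band idea concrete, here is a minimal sketch of the peaking type in Python, using the widely published “Audio EQ Cookbook” biquad formulas by Robert Bristow-Johnson. The function and variable names are my own illustration, not taken from any particular product:

```python
import cmath
import math

def peaking_coeffs(fs, f0, gain_db, q):
    """Coefficients for one parametric 'peaking' band (RBJ cookbook):
    gain_db boosts or cuts at center frequency f0; q sets the width."""
    amp = 10 ** (gain_db / 40.0)            # amplitude factor
    w0 = 2 * math.pi * f0 / fs              # center frequency in radians
    alpha = math.sin(w0) / (2 * q)          # bandwidth term derived from Q
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    return b, a

def process(b, a, samples):
    """Run the band over a block of samples: each output is a weighted sum
    of the current and two previous inputs plus the two previous outputs."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2) / a[0]
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def gain_at(b, a, fs, freq):
    """Magnitude response of the band at a given frequency, in dB."""
    z = cmath.exp(-2j * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# A +6 dB boost at 1 kHz with Q = 1:
b, a = peaking_coeffs(44100, 1000.0, 6.0, 1.0)
```

With these settings, gain_at reports +6 dB exactly at the center frequency and practically 0 dB at the extremes of the spectrum — the symmetrical peaking curve described above, produced by nothing more than five coefficients per band.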
These are the basic filter types that are available in almost all of today’s (software) EQs. The implementation details may vary, especially when you start comparing Q values … almost every developer uses his own definition. Another point that’s influenced by the implementation is the CPU consumption. When the number of tracks to be equalized in your projects is huge, this is something that will definitely become important at some point.
Design of (Software) EQs
Although you can happily use your EQs without knowing too much about what’s going on under the hood (pretty much like you can drive a car without having to be a mechanic), it can be useful, or at least interesting, to know a bit about how these filters are actually made. A thorough exploration requires quite a lot of background knowledge in math and would be beyond the scope of this article, but I’ll try to explain the process briefly, and in layman’s terms as much as possible.
There are two basic approaches: the time-based approach and the frequency-based approach. What does this mean? Well, digital audio is, as you probably know, represented by samples that are being delivered at a certain sample rate, typically 44.1 kHz. The time-based approach calculates the output of a filter at any time by “simply” calculating a weighted sum of the current sample, a certain number of samples before the current sample and (in general) a certain number of previous outputs of the filter. This is called “time-based” since it only looks at how the samples come in one after the other; there’s no direct attempt to measure the frequencies that are present in the signal. Nevertheless, with the correct weighting of samples, it’s possible to enhance or decrease the contribution of only a range of frequencies, just like in the filter examples shown in the previous section.
A very simple, intuitive example is a filter that calculates its output by simply summing the current and the previous sample. You wouldn’t expect that there’s anything useful coming out of this operation, but actually this is the simplest form of a low-pass filter! You can convince yourself of this fact when you consider a signal that only consists of the highest possible frequency that is available for the sample rate at hand (the Nyquist frequency). This signal consists of the sequence +1, -1, +1, -1, +1, -1 … wildly oscillating, as you can see. Now what’s the output of our simple filter? It’s always either (+1-1) or (-1+1) depending on the sample position; in any case, it’s identical to zero. This means that the Nyquist frequency is completely blocked. On the other hand, if we only have a DC component in our signal, the sequence of samples would look like this: 1 1 1 1 1 1 1 1… (or any other number different from 0, depending on the strength of the signal). Clearly, the output of the filter would be 2 2 2 2 2 2 … so the DC component is enhanced. All frequencies between are interpolated, which gives the frequency response of a low pass filter.
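The two-sample sum just described takes only a few lines of code. This little sketch (my own, purely for illustration) confirms both claims: the Nyquist-frequency signal is cancelled, while a DC signal is doubled:

```python
def naive_lowpass(samples):
    """y[n] = x[n] + x[n-1]: the simplest possible low-pass filter."""
    prev = 0.0  # the sample before the first one is taken as silence
    out = []
    for x in samples:
        out.append(x + prev)
        prev = x
    return out

nyquist_tone = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]  # fastest possible oscillation
dc_signal = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]        # constant (0 Hz) signal

print(naive_lowpass(nyquist_tone))  # after the first sample: all zeros
print(naive_lowpass(dc_signal))     # after the first sample: all 2.0
```

The only difference from the text is the very first output sample, where no previous sample exists yet; from the second sample on, the filter behaves exactly as the article describes.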
The time-based approach works well in most situations, and there’s a whole theory behind it that is also concerned with how to find the weighting coefficients for the summing of the samples to match a given electric circuit as closely as possible (which opens the door to simulating “classic” or not-so-classic pieces of hardware in software). One issue, however, is that close to the Nyquist frequency, the filter responses often become less than ideal, since you’re always translating a system with continuous time (your analog filter) to a system with discrete time (your simulated filter). There are remedies and tricks to avoid this to some extent, but ultimately the “cleanest” option is to use frequency-based filters (or FFT filters, named after the technique of Fast Fourier Transformation which is usually used for this). Briefly, what is done is this: instead of summing and subtracting the sample values “live” as they are flowing in, one waits a little while until one has enough samples at hand to perform an analysis of the frequencies that are contained in the sample set. The nice thing is that there is a mathematical operation (the Fourier transform) which, for a number of samples, calculates the contained frequencies (the spectrum of the signal) but also, for a given spectrum, calculates the samples that produced the spectrum. There’s a one-to-one correspondence. This means that one can shape and bend the spectrum in any way one wants, and then calculate the samples that result. So if one wants more gain at around 100 Hz, no problem, just add a peak there, easy enough. Within certain limits, any desirable spectrum can be generated, which is why FFT-based filters are usually what’s being used in free-draw EQs. And the area around the Nyquist frequency, which is critical with the time-based approach, poses no problem for FFT filters.
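The one-to-one correspondence between samples and spectrum can be demonstrated with a naive discrete Fourier transform. A real FFT implementation is vastly faster but mathematically equivalent; this sketch is my own, for illustration only:

```python
import cmath
import math

def dft(samples):
    """Samples -> spectrum: how much of each frequency the block contains."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spectrum):
    """Spectrum -> samples: the exact inverse of dft()."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# Round trip: the spectrum uniquely determines the samples and vice versa.
block = [0.0, 1.0, 0.5, -1.0, 0.25, -0.5, 0.75, -0.25]
restored = idft(dft(block))

# An FFT equalizer works in between these two steps: take dft(block),
# raise or lower individual bins (keeping conjugate symmetry for a real
# signal), then idft() back to samples.
```

Here restored matches block to within floating-point rounding; the equalization itself is simply an edit of the spectrum between the two transforms.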
There are two drawbacks, however: the FFT operation takes more CPU power than the time-based approach, and, since you always need a certain number of samples to get some precision in your frequency analysis, a delay (latency) is inevitable. This is why FFT filters are usually not considered “tracking EQs” (of which you place at least one instance on any track in your project); they rather belong on the master bus.
While there’s obviously a lot more to say about equalizers and equalization, the material presented here should give you a good starting point to begin your own journey into this field. I recommend that you not go into gear-hunting mode during the first few months, but rather find a cheap or even freeware parametric EQ and try to learn the basic principles first. You’d be surprised what a good engineer is able to produce with low-budget plugins. One thing to look for is the option for mid/side EQing, which is a simple yet effective method to treat your center and side signals separately (especially useful for creating a solid, “centered” bass and a more “airy” stereo field).
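Mid/side processing rests on one simple, standard identity: the mid signal is the average of left and right, the side signal is half their difference, and summing or subtracting them restores the original channels. A small sketch of my own:

```python
def ms_encode(left, right):
    """Stereo -> mid/side: mid carries the center, side the stereo width."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    """Mid/side -> stereo: L = M + S, R = M - S (perfect reconstruction)."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

left = [0.5, -0.25, 1.0, 0.0]
right = [0.5, 0.25, -1.0, 0.3]
mid, side = ms_encode(left, right)
# A mid/side EQ filters `mid` and `side` independently at this point, e.g.
# a low shelf on mid for a solid bass, a gentle high boost on side for "air".
l2, r2 = ms_decode(mid, side)  # recovers the original left/right
```

Anything identical in both channels (like a centered bass or lead vocal) lives entirely in mid, so an EQ move there leaves the stereo ambience untouched, and vice versa.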
In the beginning, though, the most important part is to learn how your specific mixing setup sounds (that includes the speakers and the room). It is definitely advisable to always compare your own mixes to one or more reference tracks which sound more or less the way you would like to sound, and to switch back and forth between the reference tracks and your own material frequently. The ear quickly adapts to changes in the frequency spectrum, and after a few minutes a 10 dB boost at 1 kHz will sound almost natural if you don’t have any standard reference to compare it to. Also, when it comes to frequency analyzers (like the one presented in Fig. 1), you shouldn’t use them too much initially, but rather train your ears first.
That’s about it! Hope you enjoyed the material, and happy mixing!
by Christian Siedschlag, DDMF