Interview with Andrew Souter of 2CAudio

SoundBytes interviews Andrew Souter of 2CAudio, which developed the algorithmic reverb products Aether, Breeze, and B2, and which has most recently introduced the highly innovative synthesis software Kaleidoscope.

 

by Warren Burt, Mar. 2015

 

Andrew Souter is the co-founder of 2CAudio, producer of such algorithmic reverb products as Breeze, B2, and Aether. The company has just released its newest product, Kaleidoscope, which is a radical reinvention of several sound synthesis techniques. Andrew is also a composer and pianist. His music is available on his Soundcloud site:

https://soundcloud.com/andrew_souter

Andrew is pictured at right, working on Kaleidoscope somewhere in the Gulf of Finland between Saint Petersburg and Helsinki in August of 2014.

SoundBytes: Tell us a bit about your background. Musical background? Did you study anywhere, and if so, with whom? Technical background? Where did you learn your digital craft? Which came first (if either), the music or the tech?

Andrew Souter: I have no formal education in music composition, performance, sound-design, DSP, computer science, advanced mathematics, or any other related discipline. I am for the most part self-taught in all of these areas, and I am a big believer in self-directed learning. With enough determination and obsession, I truly believe a reasonably gifted individual can make meaningful historical contributions to his field within 5 to 10 years of deeply focused effort and the sacrifice of blood, sweat, and tears. It takes a certain iron will, some might say bordering on obstinacy, but if you have this, and believe in yourself, and refuse to accept no for an answer, pretty much anything is attainable.

I started my journey into music via an interest in ambient music and film scoring in my mid-teens, around 14. I could not afford a synthesizer at the time, so I resolved to teach myself piano to begin with. Piano composition and performance are true passions of mine – first loves, if you will. They are my original art. Ambient music, electronic music, dance music, scoring, sound-design, and eventually DSP and software development all followed directly from my desire as a composer to improve my craft. I am inspired by many things. Yes, I thoroughly enjoy highly technical things, and I am proud to say I love mathematics and geeking out deep in DSP research in pursuit of developing never-before-heard, futuristic sounds. However, I just as thoroughly, if not more thoroughly, enjoy making music. I find it is necessary to remain balanced, and sometimes when we fall out of balance, it can be good to go back to our first passions and do something purely human. I am ultimately interested in telling effective emotional stories and exploring what it means to be human. This is what matters most to people in the long run. Technologies come and go, but the human narrative remains. I go very deep into the trenches of technology very often, and do so with vigor and passion, but the motivation for this is to discover new ways to tell powerful stories. It is very important to remember this. Technology is designed to serve humanity, not the other way around. Thus, by practicing purely human creative acts once in a while, one can gain a critical perspective on the proper hierarchy of needs within musical creativity and product development.

I have an undergraduate degree from the University of Southern California in Los Angeles, where I completed the Marshall School of Business Administration and the Lloyd Greif Entrepreneur Program in three years while on academic scholarship. I was accepted to USC for computer science, but I switched majors before classes started to go through the prestigious Entrepreneur program because a) I always knew I wanted to have my own company and could hire people with higher technical pedigrees than my own as necessary, and b) I wanted to have enough free time to score student films to practice my passion of film scoring. I did take a few classes on an elective basis with the Music Industry program at USC, and I did have a general-education freshman class on ethnomusicology taught by a professor who had come from Yale and taught the graduate early music and Baroque classes. I also took a student job in both the music library and the university computer store so that I could spend my time learning as much as I could about classical music theory and technology. I read everything and anything I could on music and technology, and continue to do so today! I wrote an extensive business plan for a music software company based around the idea of algorithmic composition. I was 19 at the time. I was also already beta testing for companies like MOTU, Waves, Prosoniq, Sound Toys, etc.

I did learn some great things from USC and I am proud of my time there, so it would not be fair not to give some credit, but generally speaking I am self-taught in the technical areas of music and technology. I have never had a single piano lesson in my life. Sometimes I regret this, as I would like to attempt to give concerts one day if time allows, and it would be nice to have some form of rigorous training to avoid the possibility of performance train wrecks; but until someone invents time travel so that I can go back and enrol in a conservatory somewhere, I will have to live with the humble skills I do have if that day ever comes. I have, however, put in thousands of hours at this point, maybe tens of thousands, including many lonely nights playing in an empty church on the USC campus late at night while my friends were all out at keg parties. I do the same now with DSP research, and I once lost a fiancée, whom I actually loved very much, because I was so focused (she might say obsessed) on achieving greatness at what I was working on (Aether 1.5 at the time). Sometimes this is simply what it takes to be truly great at something.

The math and science side of things comes naturally to me. I read Curtis Roads’ 1,000+ page book The Computer Music Tutorial completely while still in high school. Several times. Again and again until it actually made sense. I read the MIDI specification book during calculus class in high school. I subscribed to the Computer Music Journal, the Just Intonation Network, all the regular magazines, etc. Basically, I read and absorbed everything I could on the subject beginning around 14-15 years of age.

As I got into technical sound design as mentioned above, this eventually led me to coding. I would say, however, that to this day I am not a “proper” computer science person. I am a hacker more than a proper coder. Denis Malygin, my partner in 2CAudio, is the proper computer science person of our team. Today I write test algorithms of a couple thousand lines in C, and work out the mathematics of the things we use. I know enough to be dangerous, but I could not code our products completely by myself. I know very little about things such as memory management, optimization in assembly language, systems coding, and host integration. I am the idea guy, the creative visionary if you will, and additionally I work out the research and development and go very deep into the mathematics of the things we use. You could say I specialize in the mathematics of aesthetics. I know intuitively what is necessary and interesting to achieve the type of perfection that the world’s best composers, artists, producers, and sound-designers demand. I know this because I am all of these things myself. And somehow I can usually manage to heuristically determine how to accomplish our mathematical goals, even when my formal level of mathematical sophistication lacks the credentials one might think are necessary. My advanced placement calculus class in high school was the highest level of formal mathematics training I have ever had. That did not stop me from learning linear algebra via MIT OpenCourseWare on YouTube a few years ago, or reading all the DSP books from Julius Orion Smith at Stanford’s CCRMA (which I actually applied to in high school, not realizing it was a PhD program). You simply have to have the will to do such things. And you might have to do it 50 times before it makes any sense at all, but that’s OK. The information is out there. Imagination is more important than knowledge, as Einstein said. Knowledge is now available to anyone who would seek it.

One might summarize succinctly that I am an artist with a highly scientific and analytically oriented mind.

SB: Some of your tuning files say “2000-2005 Andrew Souter” in them. These are the rather complex ones with irrational numbers divided into N equal intervals, etc. This suggests to me that there was a period when you were very deeply into tuning. Is that true? How did you come to be so involved in exploring the world of tuning?

AS: Dividing irrational numbers into equal parts, whether on a logarithmic basis or a linear basis, seems quite simple to me these days compared to some of the other things I now explore. Fifteen years ago, in 2000 or so, however, this seemed highly advanced to me. LOL.

Yes, there was a time when I was deeply interested in alternative tuning. I still am. I was introduced to these concepts while in high school by listening to ambient music legend Robert Rich’s early work. Robert, whom I recently had the honour to work with on a film score, is a big proponent of the system of tuning called Just Intonation, which uses simple whole-number ratios to define musical intervals. This is in fact the purest tuning method, in that it aligns most perfectly with the harmonic series, which is fundamentally the basis of all tuning methods. For example, a “Perfect Fifth” in Just Intonation is exactly 3/2. In 12-Tone Equal Temperament this ratio is 2^(7/12). This is quite close to being exactly 3/2, but it is not exact. Other intervals, including minor and major thirds, can diverge by pretty severe amounts when comparing Equal Temperament and Just Intonation. Nature, and humans who are singing or playing non-fretted instruments, tend towards Just Intonation. It is the simplest solution, and often this is what nature prefers. Equal Temperament is a compromise made to allow instrument builders to produce instruments with fixed pitches, such as piano and fretted string instruments, so that they can play in any key and handle harmonic modulation without requiring the instrument to be re-tuned.
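To put numbers on that divergence: intervals compare most naturally in cents, where cents = 1200 * log2(ratio). A quick C check (plain arithmetic, nothing product-specific) shows the fifth differing by only about two cents between the two systems, while the major third is off by roughly fourteen:

#include <math.h>
#include <stdio.h>

/* Convert a frequency ratio to cents: 1200 * log2(ratio). */
static double cents(double ratio) { return 1200.0 * log2(ratio); }

int main(void) {
    printf("JI fifth 3/2       : %8.3f cents\n", cents(3.0 / 2.0));          /* 701.955 */
    printf("ET fifth 2^(7/12)  : %8.3f cents\n", cents(pow(2.0, 7.0/12.0))); /* 700.000 */
    printf("JI major third 5/4 : %8.3f cents\n", cents(5.0 / 4.0));          /* 386.314 */
    printf("ET major third     : %8.3f cents\n", cents(pow(2.0, 4.0/12.0))); /* 400.000 */
    return 0;
}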

My interest in non-musical tunings followed from this as an effort to generate novel special effects and sound-design elements for scoring projects I was working on at the time. The culmination of this research led to the Waveform tuning method in Kaleidoscope, which effectively gives an unlimited palette of experimental tunings within Kaleidoscope. I am not a fan of atonal music, but there is no rule that says sound effects must be musical. Simply consider sound-design for film, for example. Musical sound is a very small subset of all sound that is possible.

I have, in fact, just expanded the tuning systems in Kaleidoscope even more this past week, and we will put this into Kaleidoscope 1.1, which will come shortly. At that point it will be about perfect, IMHO, and I don’t see how to improve it much further at any point in the future.

SB: In Abode of Light on your Soundcloud site, are those scratching sounds at about 16:00 frog recordings processed through Kaleidoscope?

AS: There are no recordings at all! What sounds like frogs, or crickets, or any other natural thing is actually Kaleidoscope. Wayfarer, Abode of Light, Cocoon, Sunrise in Bali, Kapteyn B, Elysian Fields, and the rest of the upcoming Art Official Life are all entirely created with Kaleidoscope and our reverbs. There are no source sounds from anywhere else! Kaleidoscope generates all of the raw sound, completely by itself, using only the internal white noise generator as the source signal. All of the sounds you hear are a direct result of the power of the Kaleidoscope tuning, timing, and image modulation systems. This is quite a deep thing to think about philosophically. How is it that a visual audio effect/synth can generate very natural-sounding results which emulate things we hear in nature, biology, and space, as well as common musical structures? There is no spectral analysis happening. There is no sampling. We are effectively synthesizing pure mathematical and logical rules, and yet they sound like nature. Perhaps this is an indication that these are some of the same rules that govern everything that is found in nature?

SB: A lot of your own work with Kaleidoscope is drone oriented.  I’m wondering if you’re aware of some of the earlier workers in the drone-fields, such as Charlemagne Palestine, Eliane Radigue, and Phill Niblock, among others.

AS: I would add one little correction actually: a lot of my EARLY work with Kaleidoscope is ambient/drone oriented, yes. Now that everything is working almost perfectly, I am exploring all other uses as well.

But to answer your question, this is partially because during development and testing of Kaleidoscope I would feed it white noise as a control signal, so that I knew more or less what to expect as an output signal, in an effort to eliminate variables and make it easier to find whatever was not working correctly. It is quite hard to know if something is behaving correctly or not if you don’t know what the correct answer should be. Feeding Kaleidoscope complex audio signals often results in very complex outputs. The sound Kaleidoscope produces is a complex interaction of the characteristics of the input signal, the resonator settings, the current tonality or tuning, the visual performance data in two independent Image Maps, and the timing for each. So if we can eliminate some of these variables during development, it helps us to understand the process.

I spent a few years feeding it white noise for many hours per day. In this case, the result can be considered a pure synthesis application. It is no longer really an FX processor, as there is no incoming audio signal. It is like a subtractive process, like sculpture. We feed it an all-on signal which is spectrally flat, like a big block of marble, and then we use the resonators to carve away whatever we don’t want.

In the process of doing this, I came to realize quite quickly that these types of results are perfectly suited for ambient music and drones. Of course, this is perfected even further by feeding the result into one of our reverbs such as B2 or Aether. Kaleidoscope into Aether or B2 produces simply unbeatable results for ambient music and film-score ambiences. I have dreamed about this stuff for 20 years and have now developed tools to make anything I can imagine in these areas creatively, with a precision and fidelity better than anything I know of on the market. It’s very exciting for me!

I actually don’t know any of the names you mentioned. Sadly, I am sometimes a little bad at keeping up with the creative work of my peers, even though I respect their accomplishments deeply. The simple truth of the matter is that when you spend twelve or more hours a day working on your own sound in some form, whether software, or content libraries, or music, it does not leave a lot of time for listening to a lot of other music. My early ambient music heroes were Robert Rich, Steve Roach, Michael Stearns, Brian Eno, and everything else that was played on the Hearts of Space radio program here in the US. This is the music that started me on this journey when I was 14 or so!

SB: I’m especially curious about what causes someone to gravitate toward such an ephemeral specialty as reverb. That immediately leads to the question of the inspiration that led to B2, Aether and Breeze. What led you to dream up the notion of putting two independent reverbs in the same effect and chaining them or running them serially?

AS: I have personally always been extremely aware of spatialization in audio engineering. This is probably a direct result of my early interest in ambient music, where reverb is often used so extensively and so pervasively that it may as well be considered a band member. LOL. I was also interested in things such as binaural beats and “brain wave entrainment” via auditory stimuli at an early age. I was interested in such things already in high school, and I called the Monroe Institute, which was a leader in this field here in the USA in the 1990s, and interviewed the lead developer there around the age of 17. I remember he was concerned I was some competitor trying to gain IP from him, based on the sophistication of my questions. If you ask me how or why I understood this stuff at such an age, I really can’t answer. I have no idea. I just did.

I bring this up to illustrate that at a very young age I was already effectively thinking about things such as spatialization, inter-aural phase, timing, gain, and spectral differences, and similar concepts. In my Architecture Volume One library, which I started developing around 1999, I already have audio samples that are designed to create binaural beats, for example. My work in sound-design, electronic music, and dance music always had me thinking spatially. Over time I became quite good at audio engineering, and I eventually did some mastering engineering. A large part of this process for me was thinking spatially. It is easy to make things loud these days, and really this has been the case ever since the Waves L1 or L2 was available in the 90s. A large part of making things sound BIG, however, not just loud, is thinking spatially.
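The principle behind binaural-beat samples is simple enough to sketch: play a slightly different pure tone into each ear, and the listener perceives a beat at the difference frequency. The following generic C illustration (the frequencies are arbitrary choices for demonstration, not taken from Architecture Volume One) writes five seconds of raw interleaved stereo floats with a 7 Hz perceived beat:

#include <math.h>
#include <stdio.h>

#define SR 44100
#define TWO_PI 6.283185307179586

int main(void) {
    double fL = 200.0, fR = 207.0;   /* 207 - 200 = 7 Hz perceived beat */
    for (int n = 0; n < SR * 5; n++) {
        double t = (double)n / SR;
        float left  = (float)sin(TWO_PI * fL * t);
        float right = (float)sin(TWO_PI * fR * t);
        fwrite(&left,  sizeof left,  1, stdout);   /* raw 32-bit floats, */
        fwrite(&right, sizeof right, 1, stdout);   /* interleaved L/R    */
    }
    return 0;
}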

I did not necessarily set out to become an expert specifically in reverb. My partner in 2CAudio, Denis Malygin, had a former company, Spin Audio, before 2CAudio times. Spin Audio made a few reverbs, including “RoomVerb”, which was quite good for the time it was released. Denis developed the very first test algorithm of what eventually became Aether 1.0. Denis provided us with a reasonably good working basis. Together we made it truly great. The 0.9 algorithm was very heavily Denis’ design. Denis is also another person who did amazing things at a young age. He was 22 or so when he formed Spin Audio.

From the point of Aether 0.9 in December of 2008 to where we are now on the reverb side of things, I would say generally my obsession and perfectionism took over and started to lead development in terms of research and development, algorithm design, and feature additions. A lot of Denis’ time is spent having to work on “Systems Coding”, as we call it: the general “glue” that makes everything work together and not be completely broken when Apple or Steinberg or Avid decide to change plug-in standards, etc. As I do not handle these sorts of things, and as they inevitably slow us (and everyone else) down, I became impatient with the rate of feature implementation, so I began to learn more and more of these topics myself. Today I handle this aspect of our business almost exclusively. However, we have involved another developer as of the start of 2015, so we hope to find more research time for Denis as well going forward.

I suppose you could say it was Denis’ idea initially to do reverb, as he already had some experience in developing reverbs, but it was my idea, insistence, and willingness to do what it takes, to make our reverbs the best in the world. A lot of our competitors glorify the glory days of yore and try to emulate the great companies that came before us 30-40 years ago. We have a completely different approach. We respect the work of these companies, but we do not want to emulate them. We want to become the company that everyone else will emulate 30-40 years from now! We seek to be the modern equivalent of what Lexicon and Eventide etc. were in the 70s and 80s, and we seek to do so using the latest and greatest tools and technologies of the 21st century. Perhaps that sounds arrogant, but I see little point in striving for anything less than perfection, and as crazy as it may sound, the evidence of our track record so far seems to indicate such things may indeed be possible. We are now working on 2.0 versions of our reverbs.

Aether is a reasonably traditional reverb design, albeit a VERY flexible one with many unique features. It has an early reflections engine and a late reflections engine. These are two completely different technologies, and they could very well be two separate products. This is a very good design for modelling the behaviour of real concert halls etc. in a reasonably efficient manner.

A lot of the music we listen to in modern times, however, is absolutely not strictly authentic in terms of the spatialization of the instrument components used in the mix. What is the natural equivalent of a synth pad, or a flanger, or an auto-panner in the real world? When we use such instruments and effects in a mix, the authenticity of the totality of the spatial image of the mix is already compromised from the perspective of the acoustic purist. So why worry so much about physical emulation? It seems better in some cases to study all the psychoacoustic cues that are possible with stereo effects and build a system that spans the potential parameter space to give extreme creative control to artists, producers, sound-designers, and composers. If you are scoring the Superman movie, or Tron, or Oblivion, why should your whole mix sound like it was recorded in a Baroque chamber? It shouldn’t. Often it is highly desirable creatively to be able to have things that are larger than life. Such things elicit powerful emotions both in the composer, who is inspired by the space to go creatively in a direction they might not have chosen otherwise, and in the audience, who are the eventual consumers of their message. These are the things B2 focuses on.

SB: You mention that your recent pieces were made using just the white noise input to Kaleidoscope. I’ve noticed that having complete control over tuning, and having two interacting image maps, puts Kaleidoscope well ahead of all the other “graphic synthesis” applications out there. What led you to make what I see as the two major innovations of Kaleidoscope – the use of two image maps, and the use of resonators, rather than samples or wavetables, for the individual “line-voice” generators?

AS: You are correct.  As mentioned, the sound Kaleidoscope produces is a complex interaction of the characteristics of the input signal, the resonator settings, the current tonality or tuning, the visual performance data in two independent Image Maps, and the timing for each.   I would say each and every one of these points offers major innovations compared to what else is on the market.

First consider the input signal.  The fact that there even is any input signal already differentiates it from pure additive synthesis.  If the input signal is itself already a complex signal with complex organization in time and frequency, the result produced by Kaleidoscope will be much more diverse than is possible from using only a pure additive synthesis modality.

Second, our resonator models themselves represent major innovations, in my opinion. We did not simply look at the Karplus–Strong algorithm and stop there. No sir! Instead we looked at the great minds that have come before us and we asked ourselves: how can we do something new and unique in this general family of things? How can we advance the art? So within the sole context of the resonator models, we have done things such as employing double modes, which reduce transient response and drastically increase filter selectivity. We have added band-pass filters and variable Q, including resonant filters, to damping. We have added Spring resonator models, which are arguably even more powerful than String models when dealing with a large number of voices. We have added an FIR mode. We have found ways to achieve perfect tuning in all circumstances, when such things do not exist in the normal academic literature. And we have invented methods that vastly augment the utility of using hundreds or thousands of resonator voices, which are so unique, AFAIK, that I would prefer not to even mention what they are, as it took quite a bit of time to figure this stuff out since there was no precedent as far as I can tell.
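For reference, the classic Karplus–Strong starting point mentioned above is easy to sketch. The following is the textbook algorithm only, and deliberately shows none of 2CAudio’s proprietary extensions: a noise burst fills a delay line whose length sets the pitch, and a gently damped two-point average in the feedback loop rolls off the highs the way a real string does:

#include <stdio.h>
#include <stdlib.h>

#define SR 44100

int main(void) {
    double freq = 220.0;                /* desired pitch in Hz       */
    int N = (int)(SR / freq + 0.5);     /* delay length ~ one period */
    double *delay = malloc(N * sizeof *delay);

    /* Excite the "string" with one period of white noise (the pluck). */
    for (int i = 0; i < N; i++)
        delay[i] = 2.0 * rand() / RAND_MAX - 1.0;

    /* One second of output: read a sample, then feed back a damped
       two-point average, which acts as a mild lowpass.               */
    for (int n = 0, idx = 0; n < SR; n++) {
        int next = (idx + 1) % N;
        printf("%f\n", delay[idx]);
        delay[idx] = 0.5 * (delay[idx] + delay[next]) * 0.996;
        idx = next;
    }
    free(delay);
    return 0;
}

Note that the integer delay length makes the pitch only approximately SR/N Hz, and the averaging filter detunes it further; this is precisely the kind of tuning error the standard literature leaves open, and which the resonator models described above claim to have solved, by unstated means, in all circumstances.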

Third, our tuning system is utterly unique and unprecedented in terms of the vastness of its scope.  This should be fairly self-evident, but will be even clearer when you see Kaleidoscope 1.1, and Architecture Volume Two.

Fourth, our image formats allow 16 bits per channel, and non-destructive remapping and manipulation of the Image Map performance data. Our interpolation methods are highly precise, and our scanning of images is sample-accurate. All combined, this is a vast improvement over everything else on the market in these areas.

Fifth, and perhaps more directly to your question: yes, we use multiple images with independent scanning times to control the sound. This is completely unprecedented. Why do we do that? We found, specifically when dealing with the more generative uses of Kaleidoscope, where we are creating completely new content from scratch by using the built-in white noise generator to excite the resonators, that patterns obtainable within a single 1024*1024 image are quickly assimilated by the brain when dealing with shorter periods (i.e., faster scanning times). If, for example, we use the entire image to supply a one-measure pattern, regardless of how complex the image itself is, the listener’s brain quickly adapts to and assimilates the pattern it is being offered by Kaleidoscope. Kaleidoscope is all about creating new musical (and sound-design) surprises. We found that if we wanted very fine, short-duration time control to create nano-details in time-organization, it was desirable to have a method to make these details evolve over longer periods of time. Simply put, if you concentrate inhuman complexity into the time span of one second, the human mind still desires to have some form of evolution of the structure over larger time periods. So if we want fast details, we needed a way to achieve this. We could simply use huge images, but this is not efficient. If, for example, we have one image that has a scanning period of one measure and another that is 256 measures, and both images are 1024*1024, then in order to achieve this with a single image, it would need to be 1024 * 256 = 262,144 pixels long! This is simply way too huge to be practical. By using multiple images, the complexity of the performance data we can achieve is vastly augmented, and we can have precision control over both micro and macro details, as well as establish highly interesting musical time structures such as polyrhythms and other things.
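A rough sketch of the two-map idea, purely for illustration (the array names, the linear interpolation, and the multiplicative combination are assumptions, not 2CAudio’s actual scheme): each voice reads its row from two maps scanned at independent rates, and the two values combine into one gain.

#include <math.h>
#include <stdio.h>

#define W 1024  /* columns per image; each column is one time step */

static float mapA[W];   /* row for one voice, scanned over 1 measure   */
static float mapB[W];   /* row for the same voice, over 256 measures   */

/* Linearly interpolated read at fractional column x (sample-accurate). */
static float read_map(const float *m, double x) {
    int i = (int)x;
    double f = x - i;
    return (float)((1.0 - f) * m[i % W] + f * m[(i + 1) % W]);
}

/* Gain for one voice at time t (in measures): the fast map supplies
   fine detail, the slow map evolves that detail over 256 measures.
   A single image with the same effective resolution would need
   1024 * 256 = 262,144 columns. */
static float voice_gain(double t) {
    double xa = fmod(t, 1.0) * W;           /* 1-measure period   */
    double xb = fmod(t / 256.0, 1.0) * W;   /* 256-measure period */
    return read_map(mapA, xa) * read_map(mapB, xb);
}

int main(void) {
    for (int i = 0; i < W; i++) {           /* placeholder pixel data;  */
        mapA[i] = (i % 16) / 15.0f;         /* in practice decoded from */
        mapB[i] = i / (float)(W - 1);       /* 16-bit image channels    */
    }
    for (int m = 0; m < 4; m++)
        printf("gain at t = %.2f measures: %f\n", m + 0.3, voice_gain(m + 0.3));
    return 0;
}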

SB: The use of the combination of feedback, damping and “soft” wave-shaping curves creates quite a variety of timbres.  I’d be curious to hear your thoughts on how and why these kinds of controls evolved, and if you are planning to introduce any other kinds of timbre controls in future versions of Kaleidoscope.

AS: One of the problems with many of the competitive products that deal with additive synthesis and/or image-based generation or manipulation of sound is that most of them have no intelligent way to manage high-frequency content. There is a tendency for users to perceive additive synthesis as being harsh, shrill, or thin sounding. All of these adjectives really mean one thing: there is too much high-frequency energy in the output signal. Simple tools such as low-pass filters on the output can of course be used to reduce high frequencies, but a first-order filter has a slope that loses energy at 6dB per octave, and a second-order filter has a slope that loses energy at 12dB per octave. A first-order filter would convert white noise into brown noise. Creating a filter that converts white noise into pink noise is surprisingly difficult; there is no simple filter that will do it. Pink noise loses energy at 3dB per octave. It is also called 1/F noise, as the gain at a particular frequency is generalized to be the reciprocal of that frequency. This also happens to be the spectral weighting of the harmonics of saw and square waves.
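While there is no simple exact -3dB-per-octave filter, a practical pink approximation can be had by summing a few first-order stages with staggered cutoffs so that their slopes blend into roughly -3dB per octave across the audio band. The coefficients below are Paul Kellet’s widely circulated “economy” approximation from the music-dsp mailing list; this is a generic sketch, not anything from 2CAudio’s products:

#include <stdio.h>
#include <stdlib.h>

/* Three leaky one-pole lowpass stages with staggered time constants;
   their sum approximates a pink (-3dB/octave) spectrum. */
static double b0, b1, b2;

static double pink_from_white(double white) {
    b0 = 0.99765 * b0 + white * 0.0990460;
    b1 = 0.96300 * b1 + white * 0.2965164;
    b2 = 0.57000 * b2 + white * 1.0526913;
    return (b0 + b1 + b2 + white * 0.1848) * 0.05;  /* rough level scaling */
}

int main(void) {
    for (int n = 0; n < 10; n++) {                  /* a few demo samples  */
        double white = 2.0 * rand() / RAND_MAX - 1.0;
        printf("%f\n", pink_from_white(white));
    }
    return 0;
}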

1/F noise is pervasive throughout the universe. If you want to create sounds that emulate nature, you should generally follow this rule. There are variations on this theme in nature as well, and one can generalize and say there is a family of spectral weights of the form (1/F)^p, where p is some power between 0 and 2 or more. It is therefore highly desirable to have a simple way to scale the gain of a voice based on some form of inverse relationship to its frequency. This is exactly what the Soft control does. The Harmonic mode provides exactly (1/F)^p weighting. Once this is accomplished, it is a short step to realize other weightings might have some creative merit as well, and thus we have several Soft Modes in Kaleidoscope.
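As a sketch of that general idea (the function name and normalization are illustrative assumptions, not Kaleidoscope’s implementation), a (1/F)^p weighting across a bank of voices is essentially a one-liner; with p = 1, the harmonics of a 100 Hz voice fall off as 1, 1/2, 1/3, 1/4, exactly the saw-wave weighting mentioned above:

#include <math.h>
#include <stdio.h>

/* Gain for a voice at frequency f_hz, normalized so that f_ref gets
   unity gain. p = 0 is flat (white), p = 1 is 1/F (pink/saw-like),
   and p = 2 is 1/F^2 (brown-like). */
static double soft_gain(double f_hz, double f_ref, double p) {
    return pow(f_ref / f_hz, p);
}

int main(void) {
    for (int h = 1; h <= 4; h++)
        printf("harmonic %d: gain %.4f\n", h, soft_gain(100.0 * h, 100.0, 1.0));
    return 0;
}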

The Damping and Feedback controls are simply fundamental to the nature of resonators, and they are explained very thoroughly in the Kaleidoscope manual. There is not much more to say about them, other than: yes, in general we do intend to expand the timbral possibilities of Kaleidoscope even more in the future.

SB: I’m intrigued by the contrast between the beautiful simplicity of some of your piano music (on your Soundcloud site), and the wondrous complexity of Kaleidoscope.  Can you make any comments about that?

AS: My piano music is my most personal and intimate music. This is music that is much more about intuitive feeling than it is about logical thinking. It is highly emotional, and extremely human, music that aspires to help us all, particularly us males, be more introspective. If we talk about concepts such as Yin and Yang energy in Chinese philosophy, my piano music generally aligns more with the Yin energy – the feminine creative spirit. If we speak in terms of the language used by film director Terrence Malick in his film The Tree of Life, my piano music is more aligned with and aspires towards the path of grace, and attempts to let go of the path of nature, or the Yang energy. This is sometimes a hard thing to achieve for a competitive and analytically, logically, mathematically, and scientifically inclined male such as myself. It requires a certain state of Zen “unconscious competence”. You must let go of yourself and simply let the creative energy guide you. If simplicity is the result, as it often is, and the still small voice within says this is the correct answer, you must learn to obey it, even when the male ego is begging you to let it show off more and use all the fancy tricks it has pridefully acquired over its transient experience here in this lifetime.

Conversely, I am also very much interested in logical thinking, and in intellectual exploration and discovery. I am interested in the totality of human experience. I am interested in scientific and mathematical understanding of the universe. I am interested in universal truths. I am interested in what it means to be human and our place in the universe. Studying the organization of sound, whether it is euphonic and harmonious, or challenging, dissonant, and atonal, can give one glimpses into the nature of reality. Music perception, and indeed perception in general, is based on experience. Exposure to complex and unusual sets of organizational principles can train the mind to make sense of more complex sensory inputs and data sets. This can, at least idealistically, serve as an impetus for novel thought patterns and discoveries in other disciplines. Music and organized sound train the brain to think in new ways. Music can remind us how to feel, as well as inspire us to seek out new levels of intellectual discovery and analytical understanding. As technology evolves at an exponential or greater rate, certain aspects of humanity may struggle to keep up. Music and organized sound can both help us to remember what it means to be human and where we come from, and push the bounds of what we are capable of and where we are going.

Part of the fundamental design and purpose of Kaleidoscope is exploration. Some of the many uses and applications of Kaleidoscope that I find most rewarding myself are these sorts of grey areas between Yin and Yang, or perhaps it is better to say the states where grace and nature are balanced and in harmony with one another. These states can usually be described as an almost mystical balance of predictability and surprise, where tonality and timing are both familiar and comforting to some degree, like some benevolent motherly force, and at the same time slightly beyond standard human comprehension, pushing us to challenge ourselves to grow, like a loving father. These are the truly magic moments in Kaleidoscope for me. I find these sorts of results invaluable for communicating emotions such as awe, wonder, humility, and some form of respect for the grandeur of the universe, which is itself a perfect combination of Yin and Yang. This is what my Wayfarer ambient music releases aspire towards, and it was also a very large part of the impetus for Kaleidoscope.

 
