Steinberg Media Technologies GmbH

Creativity First

Frankenstraße 18 b
20097 Hamburg

Tel: +49 (0)40 210 35-0
Fax: +49 (0)40 210 35-300

Interview with sound designer Dave Polich

Dave Polich has been working for more than two decades as a professional sound designer and is known as one of the most experienced synthesizer programmers in the world. Michael Jackson hired him as his master sound designer for the This Is It tour. For many years Polich has also been programming the Motif and other synthesizers for Yamaha, and he was involved in programming the HALion Sonic library. A synthesizer veteran, Polich knows all the characteristic details of instruments from the '70s and '80s, and over the years he has developed an impressive feel for the legendary analog sound of the past. This can be clearly heard in the new Vintage Classics VST Sound Instrument Set for the virtual-analog Retrologue synthesizer from Steinberg. We talked to Dave Polich about his work as a sound designer and the challenge of reproducing classic synthesizer sounds.

You have been working as a sound designer for a lot of high-profile artists as well as several major companies, such as Yamaha. Tell us a little bit about your musical background and how you came to programming sounds.

I started with piano lessons at age eight, and at age 13 picked up the drums and started my first band. Ten years later I went back to playing keyboards and bought my first synthesizer which was a Minimoog. It took me three hours to figure out that I had to move the cutoff knob to make the sound brighter. But I was hooked — I just thought that hitting the keys on a synth and making a fantastic new sound come out was the coolest thing, and I still feel that way.

I was always the guy onstage during band breaks twiddling knobs on my synths, trying to figure out new ways of sound making. I ended up owning and playing every “classic” synthesizer you can think of — from analog to FM to sample playback. In 1991 I got a gig doing sound design for Yamaha and I have been doing that for a living ever since — over two decades. I feel incredibly lucky to be doing what I love to do and making a little bit of money at it.

Your tenure as a professional sound designer now spans more than 20 years. How have sounds and sound design evolved over the past decades? What is so special about the sound of the '70s and '80s?

Probably the main difference between the synths of “now” and the “early days” is that the sounds can be so much more complex. You can put one finger down on a synth workstation like the Yamaha Motif, or play one sound from a virtual synth like HALion Sonic, and an entire orchestrated piece of music happens, complete with drums and percussion. 

In the early days, it was all analog — real analog, sound generated by voltage-controlled oscillators. And the first synthesizers were huge — the size of a refrigerator, and you couldn’t store the sounds in memory. But what made those synth sounds of the ’70s and ’80s so special was that they were alive — even holding down one note, the sound was always changing slightly, like a trumpet or a violin. It wasn’t static like a still photograph. It was almost elastic and rubbery and metallic and glassy, sometimes all those things at once.

And don’t forget, back in the ’70s and ’80s, synthesizers were “brand-new” instruments, essentially. They were starting to find their way into popular music and were being used to add what was then unique “colors”. Sometimes, the sounds of the synths were so unique and standout that you could literally tell what song it was just by hearing the synth sound first. The synth sounds used by The Who were a perfect example — everyone has those special sounds ingrained in their memory now. The same could be said for sounds used by Pink Floyd, Kraftwerk, Styx, Van Halen, and other artists — the sounds themselves were essential to the songs they were in and became the identifying musical “signature” of those songs.

What was the biggest challenge you faced while re-creating the sound of the classics?

It was finding points in the songs where the sound stood out enough that I could determine what went into its creation. In other words, finding spots where the synth sound was not so “buried in the mix” that it was impossible to say with certainty whether it was a sawtooth or pulse oscillator, what type of filter was used, whether the modulation was generated by a square wave or a sample and hold, or what exactly was being modulated. A lot of times in music tracks, the synthesizer sounds have been equalized and otherwise processed with delays and reverbs as well as modulation effects, and one has to “see through” the effects to get to what the original unprocessed sound “probably” was. And many times, it was obvious upon listening that the sound actually comprised two or more recorded tracks of synths playing at the same time… a “layer”. In the early days, synthesizers couldn’t make more than one sound at a time, so if you wanted complex sounds you had to overdub tracks using different patches to build your layer. It was a challenge to determine whether I could achieve the “layer” in a single sound or had to make separate sounds for each component of that layer.

Owing to the rapid development in technology, software synthesizers are becoming more and more powerful, and are therefore used more frequently for music production and on stage. What do you consider the difference between today’s hardware and software synthesizers?

Software synths have an advantage in that it becomes possible to do things with them that are not practical with a hardware synth, which has less computing power at its disposal. An example would be an additive or granular synth that achieves its sound through heavy CPU processing that is only available on a computer. Also, you can call up more than one instance of a software synth on a computer. In Cubase I was able to get up to 16 instances of Retrologue! And I probably could have gotten more. But I only own one Prophet synth. I don’t have 16 of them, and hooking 16 of them up at once would be impractical.

As computer power has increased, the ability of software synths to replicate the classic hardware synths of yesteryear has improved to the point where it becomes difficult to tell whether it’s a software synth or its hardware equivalent that one is hearing. And of course, on computers you can have soft synths that employ all kinds of synthesis — analog, FM, granular, additive, waveshaping, etc. — sometimes simultaneously. You can literally build big “monster” synthesizer setups on your computer that were impossible to create in the early days.