Interview with Dan Lavry of Lavry Engineering

Why electronic engineering?

At 13 years of age, I looked inside an old radio. I found electronics interesting, and 50 years later it is more interesting than ever.

Did you have any influences, people you looked up to?

I come from a very musical home. My father was the national composer of Israel. He wrote the first Israeli symphony, the first opera, and many of the early Israeli songs. At home I met many of his world-class musician friends (Stern, Bernstein, Adler and many more).

Regarding the electronics, I learned most of what I know from reading and experimenting. People I look up to? Bob Adams from Analog Devices and Dr. Rich Cabot (founder of Audio Precision) come to mind. I miss my dear friend David Smith from Sony music.

It seems that many top high end audio designers worked in the IT or medical instrument fields. Do you find any resemblance?

The basic principles of electronics are the same, but each type of application does have its own specific demanding requirements. Understanding audio takes a lot of time and dedication. At the same time, in my opinion, being too specialized can be very limiting. It is good to know about other areas of electronics. A wider view yields a lot of advantages. Often, techniques and methodology from one area of electronics can be applied to another area, such as audio.

What brought you into digital audio?

I am both a musician and an engineer, so I always wanted to get into audio electronics. By the mid 80’s I had much converter design experience for medical applications, weighing scales, telecom and instrumentation. Digital Audio was taking off, and with it the need for audio converters.

What do you think of the digital audio era? Is analog fading away or just becoming a niche market?

The world is going digital for many good reasons:

Analog memory leaves a lot to be desired: vinyl wears out and scratches easily, and tape cassettes demagnetize and wear out. Both offer limited storage capacity, bandwidth, dynamic range… Digital memory is plentiful (CD, DVD, hard drives, memory sticks…). Also, the transmission, processing and duplication of analog signals are subject to unwanted alterations due to environmental noise, circuit imperfections, interconnections, grounding issues… Digital technology offers a great deal of immunity to all of the above.

But sound is analog, so in order to take advantage of digital, one must convert the signals between digital and analog thus making the conversion quality very important. Many people talk about “digital sound”, but there is no such thing. The ear does not hear data made out of 0’s and 1’s. An analog signal fed into a perfect AD then to a perfect DA would be identical to the original analog signal. So all comments regarding “digital sound” are in fact directed to specific implementations.

Viewing “digital sound” as some conceptual limiting factor is incorrect. However, the implementation of converters tends to bring about a different sonic character than the implementation of analog-only circuits (converters consist of both analog and digital circuits). For example, poor digital may introduce alias distortions; analog does not alias. Digital tends to yield better dynamic range than analog, and so on.

Many people prefer to continue with what they were used to using. For example, one may get attached to some old vintage Neumann mic. The microphone company still has the drawing plans and know-how for the vintage mic. The material technology, manufacturing process, precision, and the electronics have improved greatly in the last 50 years, so why vintage? Because many people like what they are accustomed to.

In the old world of tubes, transformers, and old transistor circuits, we could not get away from significant sound coloration (distortions). The manufacturers of gear have been striving towards sonic transparency (improving the ability to capture and reproduce music as accurately as possible). The goal of transparency demands that much of the gear not alter the sound. The speaker should not compress, the AD should not distort… Of course, there is much gear designed purposely to allow intentional sound alterations (EQ, reverb, compression and more). People who like tubes and vinyl prefer the addition of certain vintage-type sonic alterations to the sound of the original music performance.

Do you consider yourself an audiophile?

I am not much into such labels, but I do seek to listen to music at a level of quality as close to the original performance as possible; therefore I listen through my DA2002 at home.

Why did you start to produce audiophile products?

As I previously stated, I am a musician and an engineer. I play piano and accordion almost daily, and my weekends are dedicated to playing with my musical friends. I love combining music and electronics. Much of what I did was (and still is) driven by my own desire to hear the best sound possible.

What is high end audio to you?


For me, high end audio is the ability to capture the music at any performance space, and reproduce it as clearly as possible within a listening environment. I do not have an issue with sonic alterations done by the mastering engineer, or by the final listener. I am not against tubes or EQ. But the starting point in my judgment must be transparency. One can add a pink tint to one photo, a blue tint to another photo. Artistic decisions are best added to a neutral and transparent picture. This is my philosophy. My gear is not intended to do the mastering or EQ for you.


For me, high end audio is about transparency. When I hear the resonance of an important cello or violin, or the depth and imaging within a choir, I know my work has meaning.

Where is the line between hi-fi and high end?

I do not think there is a line between good hi-fi and good high end. I do see a lot of confusion regarding true high end versus mediocre and sub-mediocre gear. In my area of converters, there are a few well known mainstream converter makers that do not even bother to provide a single specification regarding dynamic range or distortion (not even for a 1KHz tone). That is mind boggling! I know that specs do not tell you everything, but the total lack of specifications leaves much to be desired. Don’t be fooled: noise and distortion specs are missing when they are very poor. Instead of good specs, I see a lot of marketing hype, with phrases such as “full bass, warm mids and crisp highs”.

What, in your opinion, is the difference between pro and audiophile audio? How do you distinguish your products between these two markets?

First, pro gear includes various tools and gear for music production, such as mic pres, AD and more. Audiophile gear mostly consists of a subset of music production gear, the listening portion (DA’s, amplifiers, speakers…).

Pro gear needs to accommodate the logistics encountered during the music production process. For example, professional gear may use analog voltage levels as high as 24dBu (a 34.72V peak-to-peak signal). Such high voltage levels work well when distributing signals over very long cable runs. Audiophile gear works fine with about a tenth of that voltage level.
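The arithmetic behind that peak-to-peak figure can be sketched as follows; the dBu reference level (0.775 V RMS) is standard, and the sine-wave assumption is mine:

```python
import math

# dBu is referenced to 0.7746 V RMS (the voltage of 1 mW into 600 ohms).
# For a sine wave, peak-to-peak is 2*sqrt(2) times the RMS value.
def dbu_to_vpp(level_dbu):
    v_rms = math.sqrt(0.6) * 10 ** (level_dbu / 20)  # sqrt(0.6) ≈ 0.7746 V
    return 2 * math.sqrt(2) * v_rms

print(round(dbu_to_vpp(24), 2))  # ≈ 34.72 V peak to peak, as quoted above
```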

We see that year by year there is a kind of race for bits and high sampling rates. Where do you think this will stop?

Regarding bits: The ear cannot hear more than about 126dB of dynamic range under extreme conditions. At around 6dB per bit, that amounts to 21 bits, which is what my AD122 MKIII provides (unweighted).
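The "6dB per bit" rule comes from each bit doubling the representable range; a quick sketch of the arithmetic behind the 21-bit figure:

```python
import math

# Each additional bit doubles the range, adding 20*log10(2) ≈ 6.02 dB.
dynamic_range_db = 126.0          # extreme limit of the ear, as quoted above
db_per_bit = 20 * math.log10(2)   # ≈ 6.02 dB per bit
bits_needed = dynamic_range_db / db_per_bit
print(round(bits_needed))         # about 21 bits
```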

Regarding sample rate: The ear cannot hear over 25-30KHz, therefore 60-70KHz would be ideal. Unfortunately there is no 65KHz standard, but 88.2KHz or even 96KHz is not too far from the optimal rate. 192KHz is way off the mark. It brings about higher distortions, bigger data files, and increased processing costs, and all that for no upside! People who think that more samples are better, and that digital is only an approximation, do not understand the fundamentals of digital audio.

What rate and how many bits are enough for today’s music reproduction and recording?

Regarding processing bits:

For music production, for adding and mixing many channels, and for various digital processing, we need more bits. One must make a distinction between processing bits and conversion bits. Say, for example, that you have 32 channels, each channel made out of 24 conversion bits. If you sum the channels you end up with 29 bits. At the end of the process, the 29 bit sum can be reduced back to say 24 bits, or to 16 bits, because the ear cannot hear 29 bits (about 175dB dynamic range). It is best to have a lot of processing bits. How many depends on the number of channels and on the type of processing.
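As a worst-case sketch of that headroom growth: summing N full-scale channels can multiply the peak value by up to N, which costs ceil(log2(N)) extra bits; the helper name below is my own:

```python
import math

# Worst-case bit growth when summing digital channels: the sum of N
# full-scale values can be N times larger than any one value, requiring
# ceil(log2(N)) extra headroom bits above the per-channel width.
def summed_bit_width(channels: int, bits_per_channel: int) -> int:
    return bits_per_channel + math.ceil(math.log2(channels))

print(summed_bit_width(32, 24))  # 32 channels of 24 bits -> 29 bits worst case
```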

Regarding the rate:

One has to make a distinction between the audio sample rate and the rate of a localized process:

The audio sample rate is the rate that carries the music data itself. Roughly speaking, the audio bandwidth is slightly less than half the sample rate. A 44.1KHz CD can contain music to about 20KHz.

At the same time, there are many cases when we use much higher “localized rates”. Such higher rates do not increase the musical content; they still offer the same original bandwidth of the sample rate. We upsample or downsample between localized rates for various technical reasons. For example, virtually all modern DA’s operate at 64-1024 times the sample rate (in the many-MHz range). Operating at such high rates simplifies the requirements of the anti-imaging filter (an analog filter located after the DA conversion). The decision about the ideal localized rate depends on the technology and the task at hand. It is an engineering decision, not an ear-based decision. As always, a poor implementation may introduce sonic artifacts, and it would be wise to refrain from the often encountered practice of far-reaching false generalizations, so common in the audio community.
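To see why a high localized rate relaxes the analog anti-imaging filter, compare where the first spectral image lands at the base rate versus at a 64x rate (64x is one of the multipliers mentioned above; the 20KHz band edge is an assumption):

```python
# After DA conversion, spectral images appear around multiples of the
# operating rate. The analog filter must pass the audio band and
# attenuate everything from the first image upward.
fs = 44_100          # base audio sample rate (Hz)
band_edge = 20_000   # top of the audio band (Hz)
oversample = 64      # one of the 64x-1024x localized rates mentioned above

first_image_base = fs - band_edge              # 24,100 Hz: very steep filter needed
first_image_64x = oversample * fs - band_edge  # ~2.8 MHz: a gentle filter suffices
print(first_image_base, first_image_64x)
```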

When the CD came on the horizon, nobody talked about jitter. Nowadays everyone has his own philosophy around it. Can you elaborate on this subject, please? What is its real importance, and how should one approach it?

Jitter is not only an audio issue. I was dealing with jitter issues in medical conversion, way before the days of digital audio. Jitter is an issue for all conversion (video, instrumentation, telecom, medical, industrial controls…).

The concept of conversion is based on two requirements:

“Taking precise snapshots”

Taking the snapshots at evenly spaced intervals, and playing them back at the same evenly spaced intervals.

Think of a movie camera with an “unsteady motor”, or a playback film projector with a motor that rattles between too slow and too fast. Either case will distort the outcome, and the distortion depends on both the jitter (speed variations) and on the subject itself. Jitter would not do much harm to a steady object, but it does alter the view of a fast-moving object. Similarly, in audio, the distortions due to uneven timing (jitter) are due to the interaction between the clock imperfection and the audio itself. Unlike tube, transformer or many other distortions, the outcome due to jitter is NOT predictable or repeatable. There is no such thing as “good sounding jitter”. There are many types and causes of jitter. What we hear is not only about the jitter amplitude and frequency; it is also about the jitter type.
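The “fast-moving object” point can be put in a first-order error model: sampling at the wrong instant produces an error of roughly the signal's slew rate times the timing error. The values below are illustrative assumptions, not figures from the interview:

```python
import math

# Worst-case error from sampling a sine A*sin(2*pi*f*t) at a time offset dt:
# error ≈ peak slew rate * dt = 2*pi*f*A*dt. Slow signals barely notice the
# clock error; fast ones do, just like the unsteady movie camera.
def worst_case_jitter_error(f_hz: float, amplitude: float, dt_s: float) -> float:
    return 2 * math.pi * f_hz * amplitude * dt_s

dt = 1e-9  # 1 ns of clock jitter (illustrative value)
low = worst_case_jitter_error(100, 1.0, dt)      # slow-moving signal
high = worst_case_jitter_error(20_000, 1.0, dt)  # fast-moving signal
print(low, high)  # the 20KHz tone suffers a 200x larger error than the 100Hz tone
```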

Conversion jitter may alter the sound significantly. At the same time, transferring data (that was already converted) between, say, an AD and a computer, or between other digital sources and digital destinations, does not call for great jitter performance. It is only during the conversion process that jitter needs to be very low. Data transfer jitter is not much of an issue when you are only moving ones and zeros.

It is relatively easy to produce a DA or AD converter, but it's hard to make it musical. What is the secret of a good sounding converter?

Engineering is the art of optimizing and compromising. Making gear musical does take a lot of experience and understanding of the ear. I never underestimate the ability of a good ear.

What exactly is the function of the DA and AD converter, through your eyes?

Music and sound are air vibration. We pick it up with a membrane (a mic) and turn it into an analog voltage. We amplify the voltage (sound), store it… and then we let the speaker (or headphone) vibrate the air and remake the sound.

It is very important to be able to make the same identical air vibrations at the playback location. The AD and DA should not alter the air pressure “patterns”. The waveform should be kept intact, as much as possible. Therefore the AD function is to precisely translate the wave into numbers, and the DA function is to translate the numbers precisely back into the same analog voltage waveform.

Do you build your products in the USA?

Yes, everything is made here on the west coast of the USA.