Excerpt from Quora: Joseph Byrd, former composer-arranger-producer / emeritus professor (1960-2015)
In 1978 I arranged and produced Ry Cooder’s JAZZ. The album was mixed by Warner’s senior engineer. He didn’t think much of me, so there was little communication between us.
During the week of mixing, I noticed that whenever we took a break, he would "zero the board," that is, return every slider and potentiometer to its neutral setting. I thought this odd, because when we returned he would have to restore everything to its previous position. I concluded that he wanted to prevent me from learning "something" he was doing, but I had no way of knowing what.
Naturally, I was curious. So I waited patiently, and eventually one day he forgot, leaving the settings as they were. Here’s what I discovered.
In addition to the EQ settings, which of course were different for each channel, every instrument shared one setting at the high end: the highest parametric band, on each instrument or vocal, was boosted to maximum in the 16 kHz-32 kHz octave, about +12 dB. I'd never seen anyone do something so extreme; it was the rough equivalent of turning the treble on an amplifier to 10.
This meant the top octave (from 16 kHz to 32 kHz) was boosted enormously, roughly four times its normal amplitude.
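For reference, a decibel boost converts to a linear amplitude ratio as 10^(dB/20), which is where the "roughly four times" estimate for +12 dB comes from. A quick sketch in Python (the function name is my own):

```python
import math

def db_to_amplitude_ratio(db: float) -> float:
    """Convert a decibel gain to a linear amplitude ratio (20*log10 convention)."""
    return 10 ** (db / 20)

# A +12 dB boost multiplies amplitude in that band by about 4x.
ratio = db_to_amplitude_ratio(12)
print(round(ratio, 2))  # ~3.98
```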
Well, the range of human hearing goes up only to about 16-17 kHz (at age 21 I could barely hear a sine wave at 16.5 kHz), so why was he doing this?
When the LP was mastered, I heard the reason. The sound of each instrument was remarkably clean and clear, even on tracks with (for example) mandolin, guitar, mountain dulcimer, pump organ, cornet, trombone, tuba, drums, and percussion. This was no “wall of sound;” each instrument could be distinguished in the mix, even “located in space,” from front to back, as though they were playing in your living room. The album was Time’s “Record of the Year,” and won Stereo Review’s “Best Engineering” award for 1978.
See, that octave, 16 kHz-32 kHz, conveys information we don't hear as "sound" so much as "metadata"; but it's not useless. Our ears "hear" it as difference tones (a difference tone's frequency is the difference between the frequencies of two real tones), and our brains receive it as clarity, position, and timbre detail. It's not something you would normally miss; after all, few records meet such standards, even toward the end of the vinyl era. But when you can actually compare the LP with the CD, the difference is clear.
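The difference-tone arithmetic is simple subtraction: two tones above the hearing limit can still interact to produce a tone well inside it. A hypothetical illustration (the frequencies are chosen arbitrarily, not taken from the album):

```python
def difference_tone(f1_hz: float, f2_hz: float) -> float:
    """Frequency of the first-order difference tone produced by two tones."""
    return abs(f1_hz - f2_hz)

# Two ultrasonic components, e.g. 19 kHz and 17.5 kHz, neither audible on
# its own, can yield a 1.5 kHz difference tone squarely in the audible range.
print(difference_tone(19_000, 17_500))  # 1500.0
```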
OK, then why isn’t the same data available on CD?
The reason is how digital audio is sampled. A CD consists of two-channel signed 16-bit linear PCM sampled at 44,100 Hz. By the Nyquist theorem, a signal sampled at 44.1 kHz can represent frequencies only up to half the sample rate, 22.05 kHz, and in practice an anti-aliasing filter cuts everything approaching that limit. So most of the 16 kHz-32 kHz octave, the octave that carries the "metadata," is removed before it is ever stored. A vinyl disc has no such hard ceiling; within the limits of the cutting and playback chain, it simply replicates the frequencies it reads.
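The CD format's numbers can be checked directly; note in particular the Nyquist limit, half the 44.1 kHz sample rate, which caps the highest frequency a CD can represent. A sketch in Python (the constants are the published Red Book CD-audio parameters):

```python
# CD audio parameters per the Red Book (CD-DA) standard.
SAMPLE_RATE_HZ = 44_100
BIT_DEPTH = 16
CHANNELS = 2

# Nyquist limit: the highest frequency a sampled signal can represent.
nyquist_hz = SAMPLE_RATE_HZ / 2
print(nyquist_hz)  # 22050.0 -- well below the top of the 16-32 kHz octave

# Uncompressed data rate of CD audio, for scale.
bits_per_second = SAMPLE_RATE_HZ * BIT_DEPTH * CHANNELS
print(bits_per_second)  # 1411200
```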
My engineer friend Garry Margolis adds this detail: "The background noise on a new pressing with high-quality vinyl will be no better than about 65 dB lower than the maximum signal loudness. On a 16-bit PCM recording, the dynamic range is 96 dB. That's not so important in pop, rock, and jazz, but it's very important for classical music. The dynamic range of 24-bit PCM and 1-bit DSD is significantly greater than that."
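The dynamic-range figures quoted follow from the bit depth: each PCM bit adds roughly 6.02 dB, via the standard theoretical formula 20·log10(2^bits). A quick check in Python (the function name is my own):

```python
import math

def pcm_dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(pcm_dynamic_range_db(16), 1))  # ~96.3, the "96 dB" figure for CD
print(round(pcm_dynamic_range_db(24), 1))  # ~144.5, notably greater for 24-bit
```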