Our friends, the Audio16 boys, have been reflecting on equalization curves, and all roads lead to RIAA... It's all physics... well, actually ears... the desire to have a larger frequency range recorded and played back.
The story starts at the cutter head. The cutter head's stylus (the cutter) is driven as a generator (motor) by the modulation signal at its input, translating the signal into amplitude swings (movements). The stylus moves with an amplitude related to both the frequency and the level of the signal (cf. audio standards): for the same input level, low-frequency signals produce a much larger swing than high-frequency signals. And there lies the issue... low-frequency grooves would have to be very wide, reducing the disk's recording time, since each track on the record would have to be as wide as the swings needed for the lowest desired recording frequency. On top of that, larger swings invariably introduce more distortion, higher-amplitude movement being by essence harder to control. This was the very issue Western Electric had to address back in the 1920s in order to extend the frequency response, and today we all enjoy the principles established in 1926 when Joseph P. Maxfield and Henry C. Harrison set their minds to the matter.
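To put a rough number on that swing problem, here is a small sketch. It assumes a constant-velocity cut (displacement = velocity / (2·pi·frequency)), and the 5 cm/s peak velocity is just an illustrative figure, not a value from the text:

```python
import math

def groove_amplitude_um(peak_velocity_cm_s: float, freq_hz: float) -> float:
    """Peak lateral displacement (in micrometres) of a sinusoidal groove
    modulation cut at constant peak velocity: x = v / (2*pi*f)."""
    return (peak_velocity_cm_s / 100) / (2 * math.pi * freq_hz) * 1e6

# Same 5 cm/s peak velocity at both ends of the audio band:
low = groove_amplitude_um(5, 20)       # bass swing, hundreds of micrometres
high = groove_amplitude_um(5, 20000)   # treble swing, well under a micrometre
print(f"20 Hz: {low:.1f} um, 20 kHz: {high:.3f} um, ratio {low/high:.0f}x")
```

A factor of a thousand between the two ends of the band: without equalization, either the bass eats the disk or the treble drowns in the noise.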
The solution they devised was to lower the swing amplitude at the lower frequencies when cutting the disk and then, in an inverted manner, boost them during playback... hey, that sounds cool! This was achieved by inserting a filter into the audio signal path. At the time it was named pre-emphasis, and the technology was borrowed from the telecommunications industry (yes, Bell Labs and Western Electric...). On playback came de-emphasis: in reality just the same filters the other way around.
Around that time (the 1920s/30s) there was not much concern for high frequencies, given the limited high-frequency response of the recordings of that era. Recording and cutting equipment got better in the 1940s, and the need to cope with higher frequencies became increasingly pressing. But here the problem is reversed... the small swing generated by the cutter at higher frequencies meant amplitudes that became too small, introducing issues just as for bass but in the opposite manner. The same simple technology was employed: the higher frequencies were boosted during cutting and then attenuated during playback. Again, simple filters served the purpose.
So at this final stage, and as we enjoy our records today, the bass is attenuated and the treble boosted during recording, and then the bass is boosted and the treble attenuated during playback. It took a while to introduce standards and arrive at the RIAA curve that is today's industry norm.
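For the curious, the standard RIAA playback (de-emphasis) curve is fully defined by three time constants: 3180 µs, 318 µs and 75 µs. A minimal sketch of the ideal magnitude response, normalized to 0 dB at 1 kHz as is customary:

```python
import math

# RIAA time constants (seconds): bass shelf, bass turnover, treble cut
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_db(freq_hz: float) -> float:
    """Ideal RIAA playback (de-emphasis) gain in dB, relative to 1 kHz."""
    def gain(f):
        w = 2 * math.pi * f
        # One zero (T2) over two poles (T1, T3): bass boost, treble cut
        return math.hypot(1, w * T2) / (math.hypot(1, w * T1) * math.hypot(1, w * T3))
    return 20 * math.log10(gain(freq_hz) / gain(1000))

print(f"{riaa_playback_db(20):+.1f} dB at 20 Hz")      # ~ +19.3 dB
print(f"{riaa_playback_db(20000):+.1f} dB at 20 kHz")  # ~ -19.6 dB
```

Roughly 40 dB of total tilt across the audio band: that is how much the playback filter has to undo what the cutting filter did.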
So we must question whether the filters used at the cutting stage match those used in our precious phono stages... do they?