Pan Law for the biggest-sounding mix

Yes, there are very few situations where one might be listening in mono these days, with most listening done on earphones, which are stereo by default. However, when listening without such isolation, in ‘free air’ over speakers, the further you are from the speakers, the more the sound collapses towards mono.

Testing in mono is required to ensure that components of one channel are not detrimentally out of phase with those in the other channel. Out-of-phase instruments or vocals sound unnaturally diffuse instead of focused and clear, even on earphones.
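As a rough illustration of the check, summing the two channels and measuring how much level is lost will flag obvious cancellation. Below is a minimal sketch assuming the stereo audio is already loaded as two NumPy arrays; the function name mono_drop_db is just for illustration.

```python
import numpy as np

def mono_drop_db(left: np.ndarray, right: np.ndarray) -> float:
    """How much quieter the mono sum is than the louder channel, in dB.
    Roughly 0 dB for in-phase material, about -3 dB for unrelated material,
    and a large negative number when the channels cancel each other."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    mono = 0.5 * (left + right)
    return float(20 * np.log10((rms(mono) + 1e-12) / max(rms(left), rms(right))))

# Example: a signal that is fully out of phase between the channels
t = np.linspace(0, 1, 48000)
left = np.sin(2 * np.pi * 220 * t)
right = -left                          # inverted copy of the left channel
print(mono_drop_db(left, right))       # large negative value: it vanishes in mono
```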

Stereo enhancers typically work by boosting the ‘sides’ – the components that are more in one channel than the other – which increases the out-of-phase content between the channels. In mono, however, some components of the result may subtract too much and end up low in the mix, or disappear altogether.
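A minimal sketch of such a mid/side widener, assuming NumPy arrays for each channel; the side_gain parameter is illustrative. The closing comment shows why widened material suffers in mono.

```python
import numpy as np

def widen(left: np.ndarray, right: np.ndarray, side_gain: float = 1.5):
    """Basic mid/side widener: boost the 'side' (difference) signal.
    side_gain > 1.0 widens the image; 1.0 leaves it unchanged."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * side_gain
    return mid + side, mid - side      # new left, new right

# Whatever the side_gain, (new_left + new_right) equals 2 * mid, so any component
# the widener pushes into the side is exactly the part that cancels in mono.
```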

The typical culprit for out-of-phase signals is poor microphone placement when several mics are used near each other, even if they are not recording the same instrument or vocal. Sometimes toggling the phase switch on one or more channels will produce a less objectionable result.
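For illustration only, that polarity choice can be approximated numerically by comparing the mono sum with the suspect channel as recorded and with it inverted; in practice the decision is made by ear. The helper below is hypothetical and assumes NumPy arrays.

```python
import numpy as np

def polarity_hint(left: np.ndarray, right: np.ndarray) -> str:
    """Report which polarity of the right channel loses less energy in mono."""
    rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
    if rms(left - right) > rms(left + right):
        return "try flipping the right channel's phase switch"
    return "leave the polarity as recorded"
```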


Human hearing actually uses two mechanisms to determine the direction a sound comes from, with the transition between them at frequencies whose wavelength is roughly the distance between the ears, centred around 1.5 kHz. The mechanisms are:
a) Lower frequencies - located primarily by the phase (timing) relationship between the sounds arriving at each ear.
b) Higher frequencies - phase relationships become harder to discern, so direction is judged from the relative levels at each ear.
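As a quick check of that crossover figure, a wavelength roughly equal to the spacing between the ears (about 0.23 m, taking the path around the head as an assumed value) corresponds to:

```python
SPEED_OF_SOUND = 343.0    # m/s in air at about 20 °C
EAR_SPACING = 0.23        # m, rough effective distance between the ears

crossover_hz = SPEED_OF_SOUND / EAR_SPACING
print(f"{crossover_hz:.0f} Hz")    # -> 1491 Hz, i.e. roughly 1.5 kHz
```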

Basically, panning in audio devices is done by adjusting levels, as in the latter mechanism above, but for ALL frequencies. It seems to work, or at least we are fooled well enough that complex phase-processing algorithms are not required for low-frequency mixing.
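Below is a minimal sketch of the common constant-power pan law, assuming a mono NumPy signal; the -3 dB centre attenuation is one common choice, and other laws (-4.5 dB or -6 dB at centre) differ only in the gain curve.

```python
import numpy as np

def constant_power_pan(mono: np.ndarray, pan: float):
    """Level-only panning with a constant-power (-3 dB centre) pan law.
    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    angle = (pan + 1.0) * np.pi / 4.0                   # map pan to 0 .. pi/2
    return mono * np.cos(angle), mono * np.sin(angle)   # left, right

# At centre the gains are cos(pi/4) = sin(pi/4) ≈ 0.707 (-3 dB each), so the
# combined acoustic power, and hence the perceived loudness, stays roughly
# constant as a sound is swept from one side to the other.
```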