WaveLab 8 request(s)

In the Batch Processor, when the output format is non-linear (as it is with all lossy codecs), it would be nice to have a type of level normalizer that would 1) decode the encoded output and check it for clipping, and 2) if clipping is actually found, apply an appropriate degree of gain attenuation before the coder (and also before dithering, if dithering is used).
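
To illustrate what I mean, here is a minimal Python sketch of the requested logic. The codec is represented by a pair of encode/decode callables, and the names `required_attenuation` and `encode_without_clipping` are hypothetical; this is just a sketch of the trial-encode/check/attenuate idea, not how WaveLab would actually implement it.

```python
import numpy as np

FULL_SCALE = 1.0  # 0 dBFS for float samples in the range [-1.0, 1.0]

def required_attenuation(decoded: np.ndarray) -> float:
    """Return the linear gain (<= 1.0) needed so the decoded signal
    stays at or below full scale; 1.0 means no attenuation needed."""
    peak = float(np.max(np.abs(decoded)))
    return 1.0 if peak <= FULL_SCALE else FULL_SCALE / peak

def encode_without_clipping(source: np.ndarray, encode, decode):
    """Trial-encode the source, decode the result, and check it for
    clipping. If clipping is found, attenuate the source (ahead of
    any dither stage and ahead of the coder, as the request
    describes) and encode again. `encode` and `decode` stand in for
    whatever lossy codec the batch job uses."""
    decoded = decode(encode(source))
    gain = required_attenuation(decoded)
    if gain < 1.0:
        # Apply the gain to the original source, not the decoded
        # audio, so dithering and coding see the attenuated signal.
        source = source * gain
    return encode(source)
```

The point of the two-pass approach is that lossy coding can overshoot the original peaks, so the only reliable way to know whether the delivered file will clip on playback is to decode the trial encode and measure it.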