Thank you, I have done that. I didn't think my post gave the opposite impression; if it did, please understand that was an error on my part.
It wasn't a response to you, but a general notification. If it looked like an attack on you, I'm very sorry.
Yes, but Cubase's SRC transition-band filter is shown to reach only -6 to -12 dBFS at 22-23 kHz. I could easily see that causing audible alias bands.
But it doesn't. When converting to a 44.1 kHz sample rate, audio in the 22-23 kHz range aliases into the 21-22 kHz range. I cannot consider that audible (unless we start talking about intermodulation distortion generated by the analog audio chain). There is a very good reason why the original CD audio standard is 44.1 kHz instead of 40 kHz: insurance against an imperfect anti-alias filter.
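To make the folding arithmetic concrete, here is a minimal sketch (plain Nyquist folding, not tied to any particular SRC implementation): residue above 22.05 kHz mirrors back to just below it, so it stays well above the ~20 kHz hearing limit.

```python
def alias_frequency(f_hz: float, fs_hz: float = 44100.0) -> float:
    """Where a component at f_hz lands after sampling at fs_hz.

    Anything above Nyquist (fs/2) folds back ("mirrors") around Nyquist.
    """
    f = f_hz % fs_hz                       # the spectrum repeats every fs
    return f if f <= fs_hz / 2.0 else fs_hz - f

# Leakage the transition-band filter lets through near 22-23 kHz:
for f in (22500.0, 23000.0):
    print(f"{f / 1000:.1f} kHz folds to {alias_frequency(f) / 1000:.2f} kHz")
# 22.5 kHz folds to 21.60 kHz
# 23.0 kHz folds to 21.10 kHz
```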
To me, the question remains: how do we know whether these graphically obvious differences from the ideal, and between different DAWs, are audible?
We know if we stop for a moment to think about the basic physiology of human hearing.
EDIT: just to clarify:
1. It's absolutely impossible to hear distortion more than 60 dB below the current signal level
2. We cannot hear anything above 20 kHz (for me, as an old relic, it's more like 15 kHz)
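For scale, the standard dB-to-amplitude conversion (nothing specific to any DAW) shows what -60 dB means:

```python
def db_to_amplitude(db: float) -> float:
    """Convert a level difference in dB to a linear amplitude ratio."""
    return 10.0 ** (db / 20.0)

# Distortion 60 dB down is 1/1000 of the signal's amplitude:
print(db_to_amplitude(-60.0))  # 0.001
```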
For just the same reason, I use SoX on final masters; for "pre-release" stuff I use Cubase. But why not use Cubase on final masters, since you make a strong case that any artifacts of Cubase's SRC are inaudible?
For exactly the same reason that peakae uses r8brain: just to feel better, and not to have to worry about some extremely unlikely situation I have not taken into consideration. Just like:
a. I record (and mix and master) at 88.2 kHz even though 44.1 kHz is fine (just in case there are badly behaved DSP algorithms in my signal chain)
b. I dither when reducing bit depth even though none of the music I produce requires it (because it costs nothing; see the sketch after this list)
c. I record 24-bit audio even though, in at least 99% of my recordings, 16 bits would capture all the detail (so I don't have to worry about the other 1% of cases)
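As an illustration of point b, here is a minimal sketch of TPDF dither before 16-bit truncation (a common textbook approach; I'm not claiming this is what any particular DAW does internally):

```python
import numpy as np

def dither_to_16bit(x: np.ndarray) -> np.ndarray:
    """Quantize float samples in [-1.0, 1.0] to int16 with TPDF dither.

    Triangular (TPDF) dither, 2 LSB peak-to-peak, decorrelates the
    quantization error from the signal, so truncation distortion becomes
    a constant, benign noise floor instead of signal-related harmonics.
    """
    rng = np.random.default_rng()
    scaled = x * 32767.0
    # Sum of two uniform variables gives a triangular PDF, +/-1 LSB wide.
    tpdf = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.clip(np.round(scaled + tpdf), -32768, 32767).astype(np.int16)
```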