Where do you guys stand on Sample Rate (kHz)

Patanjali, I didn’t mean to insult you; it’s just that I think science should come first, before everything else.

In my day job I have seen so much cargo-cult behaviour over the years that I learned to spot it from 100 kilometers against the wind long ago. :smiley:

Of course, Bob Katz is an expert, someone I hold in high regard, but that doesn’t make him an authority who can overrule basic scientific facts that have been known for, hm, 87 years (the Nyquist theorem dates from 1928, I think).

Experts tend to stick a bit too much with beloved but outdated knowledge (and I have explained where this comes from: bad filter design!). An example: we still discuss the impact of Hyperthreading / SMT on Cubase, which was indeed a bad idea in the past (because of the “Replay” speculative execution system of the Pentium 4 CPU) but is usually a very good thing to leave enabled nowadays.

But, please let me answer the very interesting objections you have brought up:

Actually, no. I have already given the example of temporarily generating very high frequencies, which is relevant for saturation plugins, to start with: the nonlinearity is applied at an internally raised sample rate, and when downsampling again afterwards, low-pass filtering is applied to prevent aliasing, so no problem there.
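Just to make that concrete, here is a minimal sketch of the idea (Python with numpy/scipy; the function name, drive amount and oversampling factor are only illustrative, not taken from any actual plugin):

```python
import numpy as np
from scipy.signal import resample_poly

def saturate_oversampled(x, factor=4):
    """Toy saturation stage: upsample, apply a nonlinearity, downsample.

    resample_poly runs a low-pass FIR filter on the way up (anti-imaging)
    and on the way down (anti-aliasing), so the harmonics created by the
    tanh() above the original Nyquist frequency are removed before the
    signal returns to the base sample rate.
    """
    x_up = resample_poly(x, factor, 1)     # raise the internal sample rate
    y_up = np.tanh(2.0 * x_up)             # nonlinearity generates new harmonics
    return resample_poly(y_up, 1, factor)  # low-pass filter, then decimate

# A 10 kHz sine at 44.1 kHz: without oversampling, the odd harmonics the
# tanh() creates (30 kHz, 50 kHz, ...) would fold back into the audio band.
fs = 44100
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 10000 * t)
driven = saturate_oversampled(clean)
```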

Another example is filtering with resonance: a zero-delay-feedback resonant filter is very hard to implement and takes a lot of processing power, whereas at extreme sample rates a few samples of delay in the feedback path don’t distort the signal as much - so you can go with a filter design that is actually lighter on the CPU, even though it operates on a larger dataset (see the sketch below).
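As a concrete stand-in for such a “naive” design (I am not claiming this is what any particular plugin uses), the classic Chamberlin state-variable filter is a good example: the resonance feedback uses the previous sample’s state instead of being solved implicitly, so it costs only a handful of operations per sample, but it only stays well-behaved for cutoff frequencies up to roughly fs/6 - exactly the kind of design that a 96 or 192 kHz rate rescues.

```python
import numpy as np

def chamberlin_svf_lowpass(x, fs, cutoff, q=5.0):
    """Chamberlin state-variable low-pass filter (naive, non-zero-delay form).

    Cheap (a few multiplies and adds per sample) because the resonance
    feedback uses the previous sample's state instead of being solved
    implicitly; tuning and stability only hold for cutoffs well below the
    sample rate (roughly fs/6), which is why a higher sample rate makes
    this design usable across the whole audio band.
    """
    f = 2.0 * np.sin(np.pi * cutoff / fs)  # frequency coefficient
    damp = 1.0 / q                         # damping = 1 / resonance
    low = band = 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        low = low + f * band               # integrator 1 (last sample's band)
        high = s - low - damp * band       # highpass node
        band = band + f * high             # integrator 2
        y[i] = low
    return y

# At 44.1 kHz a resonant 10 kHz cutoff already breaks the ~fs/6 rule of thumb;
# at 192 kHz the very same cheap filter sits comfortably inside its valid range.
fs = 192000
noise = 0.1 * np.random.randn(fs)
filtered = chamberlin_svf_lowpass(noise, fs, cutoff=10000.0, q=8.0)
```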

Of course there are more possibilities.

Yes, this theorem IS of (some) relevance; that’s why jitter-free clock generators are so popular. However, it has nothing to do with the sample rate in itself: using the Cheung-Marks theorem to explain why there might be an audible difference between 44.1 kHz and 96 kHz sampling is akin to using general relativity to explain the operation of a grandfather clock.

As I said - I’m willing to spend €€€ on nice hard- and software, no problem there - as long as I get something in return. Cubase Pro, for example, is an amazing deal, as was my SPL Gainstation or my Stratocaster, it’s all fine.

However, the other side of economics is the returns… and I don’t see any tangible returns from using higher sample rates; it just cuts my available CPU power in half (more or less; I have explained the additional factors).

I don’t think the name of one of the thousands of people who studied mathematics in Vienna would be of any use to you. However, I did bring up many questions (including the “doubly infinite” one; his answer was: “forget that, this is a mathematical construct, it doesn’t apply here”), and he answered them all in a way that strongly supported my point of view.

True, my fault. There is even a formula for the minimum discrete symbol rate, one you can actually derive for yourself (the standard form is below).
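For the record, this is nothing more exotic than the textbook Nyquist-Shannon condition (my wording, just to have it written out once):

```latex
f_s > 2B \quad \text{where } B \text{ is the highest frequency in the signal;}
\qquad B = 20\,\mathrm{kHz} \;\Rightarrow\; f_s > 40\,\mathrm{kHz}
```

which is why 44.1 kHz, with a little headroom left over for the anti-aliasing filter’s transition band, already covers everything we can hear.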

No, it does not apply only to infinitely repeated waveforms. If it did, it would preclude the representation of information per se, which is the main point of the Nyquist theorem.

Did you watch the great xiph.org video?

Marketing. Bigger, better, faster, higher, more.

Not that it matters in real life; 96 kHz simply sounds like “more”, and so it can be sold.

If companies were fair and marketing were truthful, they’d never have started this fad in the first place.

It’s as simple as that: building a low-quality 96 kHz ADC is much simpler and cheaper than building a great 44.1 kHz one (the anti-aliasing filter gets a far wider transition band to work with) - and the masses demand easily digestible numbers.

THD, SNR?

You lose about 98% of the audience with those two terms - but “96 kHz is better than 44.1 kHz” is something even an intern at a musical instrument store can blurt out at customers.