32-bit or 64-bit?

The 32-bit float engine is really good in terms of bit depth; at 44.1 kHz it is more than enough for audio.
The 64-bit engine is supposed to be better, but only very slightly.
Personally, I would prefer to stay with 32-bit float.

For those who say higher bit depths and higher sample rates are pointless - I suggest a little humility here. Can you make “Wavelab”? Can you make “Cubase”? Can you make Weiss digital processing equipment? Very few people in the world can do these things… and despite that, do you really think these people are “stupid”? That they somehow don’t know what they’re doing and just add pointlessness to everything they do? Or are they more likely striving for the absolute pinnacle of excellence - even if it’s only a 1% improvement?

Anyway - with that in mind, I have to say I have done extensive testing with both higher bit depths and higher sample rates (including 192 kHz, DSD, etc.). While I feel “recording” at higher rates offers VERY little benefit (but “possibly” some), I have come to believe that some digital “processing” can benefit from these higher rates as well. In the context of software like Wavelab/Cubase/ProTools/etc., that “processing” occurs MANY MANY times throughout a mix… every single fader deviation from 0 dB on each and every track? Digital processing. Every single plugin? Digital processing. Every summing occurrence? Every envelope? Digital processing.

Work at low rates if you want, but don’t get too arrogant about it. There are others in the world who may disagree with you, and they are not just the “stupid” bottom-of-the-barrel type people. Anyway… just some thoughts on the subject. We all need to trust our instincts and our hearts when making creative, critical decisions… just as the engineers who make our favorite tools do. Maybe put at least a tiny bit of trust in some of the people who make these incredible tools that we use every day.

-Todd


Yes, to agree with Toader, but more briefly:

Just because you can’t hear it doesn’t mean it’s not beneficial to the end result - and ultimately to the end-user result after lossy encoding.

44.1k is technically enough for the human ear, but I personally feel that working at 96k allows my digital and analog tools to perform better and ultimately achieve a better end result.

Then I use quality SRC (Saracon) to reduce to 44.1k and/or 48k as needed.

I would welcome a 64-bit audio engine from WaveLab but I don’t think WaveLab is behind in adding this. REAPER has a 64-bit audio engine but since RX6 is still 32-bit float and I heavily use RX6 as REAPER’s external editor, I still save processed audio files as 32-bit float rather than 64-bit.

This has nothing to do with arrogance, but with science versus marketing. Try some articles by Dan Lavry (not your average audio know-nothing), in which he suggests, backed by data, that 192 kHz is possibly worse than 96 kHz - especially in AD converters. I’m not even going into the massive dynamic range 32-bit float has, and to which 64-bit float adds nothing useful - simple maths. Marketing has its own rules though: ‘If everyone else has it, so must we’, and ‘If 96k is so much better than 44.1, can we go and double that?’.

Although Dan Lavry discusses the optimal sample rate for “recording”, he doesn’t speak much about “processing” at higher rates - but even he recommends 88k or 96k. Here is a quote from one of his papers:

“At 60 KHz sampling rate, the contribution of AD and DA to any attenuation in the audible range is negligible. Although 60 KHz would be closer to the ideal; given the existing standards, 88.2 KHz and 96 KHz are closest to the optimal sample rate.”
http://www.lavryengineering.com/pdfs/lavry-white-paper-the_optimal_sample_rate_for_quality_audio.pdf

Anyway, regarding sample rates, I know what I hear. Regarding 64-bit processing, the designers have decided to add it for some reason. I’m assuming they’re striving for excellence. I look forward to testing to see if I can actually tell a difference.

My guess is that 64 bit float processing is more about systems compatibility and future proofing than “sound”. It’s unlikely to make a difference between whether the record goes platinum or not.

I work at 96kHz now mainly because it’s typically the deliverable for MFiT and for some processing (eg DMG EQ which does not apparently upsample internally) there is a case to make that it also “sounds better”. I do accept that a lot depends on your SRC in terms of the subjective test of what it sounds like at 44.1kHz.

The only thing that is relevant to safety is monitoring volume; this has NOTHING to do with dynamic range or 64-bit accuracy.

The interest in 64-bit float is not about “headroom” / “dynamic range”…

pro:

  • No need to convert between 32-bit and 64-bit float: 64-bit is needed by some plugins for their internal computations. Running the whole chain at 64-bit then means a small performance gain and no precision lost between successive 64-bit plugins (see the sketch after this list).
  • Better audio precision when mixing audio signals. I explain this at the end of this message.
  • If audio devices ever go beyond 24-bit precision, 64-bit float will be needed (because 32-bit float means, in fact, 24-bit precision)

con:

  • Requires more memory, which can mean a performance loss (more memory to move). But as soon as a sophisticated plugin is used, that plugin will likely become the bottleneck compared with the memory overhead. Therefore, this is a “relative con”.
  • 64-bit CPU instructions are as fast as 32-bit instructions, because today’s CPUs are 64-bit. But certain SIMD instructions are faster with 32-bit float, because the CPU can process two 32-bit values in the time it takes to perform a single 64-bit operation.
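As a rough illustration of the first “pro” point, here is a minimal NumPy sketch (the sample value and gain are arbitrary, and this is not any real plugin chain): the same gain computed entirely in 64-bit float versus computed after truncating to 32-bit float in between.

```python
import numpy as np

x = np.float64(0.1234567890123456)   # a sample as a 64-bit plugin would hold it
gain = np.float64(1.0 / 3.0)         # some gain applied inside the next plugin

y64 = x * gain                                        # stays in 64-bit float end to end
y32 = np.float64(np.float32(x) * np.float32(gain))    # truncated to 32-bit between plugins

print(f"64-bit path : {y64:.17f}")
print(f"32-bit path : {y32:.17f}")
print(f"difference  : {abs(y64 - y32):.2e}")  # on the order of 1e-9, i.e. roughly -180 dBFS
```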

Now, an explanation about 32-bit float vs 64-bit float for mixing.
While 32-bit float means, in fact, 24-bit precision, 64-bit float means, in fact, 53-bit precision. This means far more precision.
I can illustrate this difference with elementary school maths (this is an analogy of what happens in reality).

  • Let’s say samples can have only values 0, 1, 2, 3, 4, 5,…
  • Let’s start with a sample with the value “3.”
  • An audio gain of “divide by 2” is applied. We get the value “1.5”, but this value is not allowed, hence it must be rounded; e.g. the new value becomes 1.
  • Later, another gain “multiply by 2” is applied. The new sample becomes “2”.

Consequence: we started from value “3” and ended up with value “2”, while the two gains should have canceled each other.

When this kind of loss is performed multiple times (complex mixing), errors stack up.
The consequence is not dramatic because some errors are (randomly) compensated by others (round-down / round-up), but this compensation actually means “digital fog,” aka noise.

64-bit float processing pushes the digital fog far from the 24-bit domain. Hence a cleaner result at the end of the audio chain.

The difference between 32 and 64 is, therefore, about “audio definition”, if your ears are sensitive enough. But that’s another topic!
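To make the “digital fog” idea concrete, here is a minimal NumPy sketch (made-up signal and gains, simulating the kind of cumulative rounding described above, not WaveLab’s actual engine): the same chain of gain changes is run once at 32-bit and once at 64-bit float, and the residual error against the original signal is measured.

```python
import numpy as np

rng = np.random.default_rng(0)

# A pretend piece of audio, made exactly representable in 32-bit float so that
# both paths start from the identical signal.
signal = rng.uniform(-0.5, 0.5, 100_000).astype(np.float32).astype(np.float64)

x32 = signal.astype(np.float32)   # "32-bit float engine"
x64 = signal.copy()               # "64-bit float engine"

# 200 pairs of gain changes that should cancel exactly (g, then 1/g), like the
# divide-by-2 / multiply-by-2 example above, but with gains that are not exact
# binary fractions, so every single step has to round.
for g in rng.uniform(0.5, 2.0, 200):
    g32 = np.float32(g)
    x32 = (x32 * g32) / g32
    x64 = (x64 * g) / g

for name, x in (("32-bit path", x32.astype(np.float64)), ("64-bit path", x64)):
    rms_error = np.sqrt(np.mean((x - signal) ** 2))
    print(f"{name}: residual fog = {20 * np.log10(rms_error + 1e-300):.1f} dBFS")
    # the 32-bit path's fog stays far below audibility, but lands many tens of dB
    # above the 64-bit path's
```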


Thanks for the great explanation PG. I wonder if this is why years ago people thought that SoundBlade by Sonic Solutions had a better playback engine sound?

I had a feeling that the reason would be more about the bigger picture. Just because you can’t hear something directly doesn’t mean it won’t have a bigger effect down the line, adding some “digital fog” as you say.


Hi Philippe

Very good explanation. When you go for the highest quality, you have to go to 64-bit. We work with acoustical music, and there it is a must to try to reach the highest result. WaveLab could be a very high quality DAW.

Good explanation, Philippe, though quite theoretical. If a mix is so complex that the 32/64 difference becomes audible, I think there will be other issues to worry about… And for Wavelab I would think all this is even less relevant.

Thank you for the explanation PG!

I’m assuming 64-bit processing within WaveLab is on the near horizon? :slight_smile:

Short distance horizon


Cool :slight_smile:

We do hear the cumulative effect. We may not hear its small constituents in isolation though. A good way to explore the phenomenon is to take a heavily processed mix and compare it to what it sounded like before mixing and processing. The sound of the non-processed tracks is likely to have more clarity and integrity. It’s raw, it’s unfinished, but somehow better-defined. Whereas the sound of the mix exhibits a generic kind of smearing that wasn’t there in the beginning. I call it ACE (the audio condom effect).

Somewhere I read this:



It’s too pointy and floaty. I prefer 64 bit fixed lines.

In terms of audio formats and processing, it’s about noise floor and headroom. Since 32-bit float has something like 1600 dB of headroom, that is never a problem. As for the noise floor, it can only matter for certain mathematical processes under specific conditions. Filter design is an example; however, both FL Studio 32-bit and 64-bit already use extended precision (80-bit or 64-bit) where necessary.
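For what it’s worth, the ballpark of that headroom figure can be checked directly from the IEEE 754 limits; a small NumPy sketch (span between the smallest normal number and the largest finite value, nothing DAW-specific):

```python
import numpy as np

for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    # work in log space so the float64 ratio does not overflow
    span_db = 20 * (np.log10(float(info.max)) - np.log10(float(info.tiny)))
    print(f"{dtype.__name__}: {info.nmant + 1}-bit mantissa, "
          f"~{span_db:.0f} dB from smallest normal to largest value")
```

For 32-bit float this comes out around 1500 dB, in the same ballpark as the figure above.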

It’s certainly not something an end-user needs to worry about, and it’s definitely of no use as an export format.



Because they decided to bite the bullet (in terms of code refactoring) and do it for marketing and feel-good purposes. It certainly does no (audio) harm to use full 64-bit internal processing, but it does no good compared to 32-bit float either…

WHEN you use 80-bit and 64-bit precision internally where it matters (which FL Studio does) - but only where it’s necessary. Again, this is not a real issue. If you use plugins, and everyone does, this is where it matters. Plugin developers who know their stuff all use double precision internally where it matters, though some don’t. The DAW is not your concern here; plugins really are.

We feel no need to shout about it.

But this is another of those no-win conversations with people who don’t understand DSP. Bigger numbers are always better, and if your number is smaller than the other guy’s, you will feel inferior. My favorite is watching people argue about why a -385 dB noise floor is better than a -144 dB noise floor… while listening to a file with a -96 dB noise floor on equipment with an -80 dB noise floor :slight_smile:



“Recording” has a specific meaning here. There is no 24-bit recording on the planet that makes use of all 24 bits; the noise floor of the highest-quality audio gear is around 18-20 bits.
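For reference, the usual rule of thumb for turning bit depth into theoretical dynamic range (about 6.02 dB per bit, plus ~1.76 dB for a full-scale sine versus quantization noise) puts those 18-20 “real” bits at roughly 110-122 dB. A tiny sketch:

```python
# theoretical SNR of an ideal N-bit converter (full-scale sine vs. quantization noise)
for bits in (16, 18, 20, 24):
    print(f"{bits}-bit: ~{6.02 * bits + 1.76:.1f} dB")
```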

PG, what’s your opinion about this?

Remember that while mixing you don’t (I hope) keep everything jammed up against digital full-scale all the time, so a few extra bits at the bottom are necessary to allow for that. Also, the problem is not so much the exact number of bits to sufficiently represent a single signal with adequate SNR, but more the accumulation of noise from errors (such as PG described) or dither (which fully compensates for those errors, but at the cost of added noise) at multiple points every time the audio is processed - which can be many, many times in modern practice.
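A small NumPy sketch of that accumulation (hypothetical numbers, purely illustrative): it requantizes to a 24-bit grid with ±1 LSB TPDF dither at every stage and measures how the added noise grows with the number of stages.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 2.0 ** -23                               # one 24-bit step, with full scale at +/-1.0
signal = rng.uniform(-0.5, 0.5, 100_000)

def requantize_24bit_tpdf(x):
    """Round to the 24-bit grid after adding TPDF dither of +/-1 LSB."""
    dither = rng.uniform(-q / 2, q / 2, x.shape) + rng.uniform(-q / 2, q / 2, x.shape)
    return np.round((x + dither) / q) * q

for stages in (1, 10, 100):
    y = signal.copy()
    for _ in range(stages):
        y = requantize_24bit_tpdf(y)
    noise_db = 20 * np.log10(np.sqrt(np.mean((y - signal) ** 2)))
    print(f"{stages:3d} stages -> added noise = {noise_db:.1f} dBFS")
    # each tenfold increase in stages raises the accumulated noise by about 10 dB
```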

Finally, as PG noted, modern hardware is 64-bit anyway, so there is nothing to be gained by wasting effort advocating for not always using it.

Paul

Thank you for this explanation, PG!
