jono not bono wrote:
Wow. I really enjoyed that video. Clears up a lot of opinionated Dogs Brown.
So, what are the actual pros and cons of recording at higher sample rates (bearing in mind I can't hear any difference above 44.1 kHz)? As for bit depth, I'm more than happy with 24-bit, but could someone explain what the benefits of using 32-bit float are? I've never understood it!
First - I didn't author the article and have no connection to it.
Second - It's not wrong. Nor garbage to be wholly dismissed as others claimed. It is what I said, a good place to START.
As for the difference between 24-bit and 32-bit float:
To sum it up in one line without the pages of technical engineering data:
32-bit float allows for processing or storage of a higher dB range (dynamic range) before the waveform is subject to truncation ('clipping').
24-bit fixed point, using the roughly 6 dB-per-bit rule, gives just over 144 dB of dynamic range.
32-bit float doesn't follow that rule directly: it pairs a 24-bit mantissa with an 8-bit exponent, and the exponent stretches the representable range to roughly 1528 dB.
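To make that arithmetic concrete, here's a quick Python sketch (my own illustration, using the standard IEEE 754 binary32 constants, not anything from a DAW):

```python
import math

# Fixed-point PCM: roughly 6.02 dB of dynamic range per bit.
def pcm_dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)   # == bits * ~6.02 dB

# 32-bit IEEE float: the 8-bit exponent sets the representable range,
# from the smallest normal value (~1.18e-38) to the largest (~3.4e38).
def float32_dynamic_range_db() -> float:
    largest = (2 - 2 ** -23) * 2 ** 127
    smallest_normal = 2 ** -126
    return 20 * math.log10(largest / smallest_normal)

print(round(pcm_dynamic_range_db(24), 1))   # ~144.5 dB
print(round(float32_dynamic_range_db()))    # ~1529 dB
```

Run it and you get the two numbers quoted above: just over 144 dB for 24-bit, and roughly 1528-1529 dB for 32-bit float.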
Now how is this useful in processing/Mixing?
SOUND is the word we use for the translation of a waveform (or, more relevant to the topic of mixing, of MANY INTERACTING waveforms at different frequencies).
Take 10 complex frequency sources at 0 dB and sum them to your 2-channel buss.
At the 2-channel buss you will be well over the 0 dBFS limit, and your 24-bit project will be a clipped, aliased and distorted mess.
But this is not the case with 32 bit float!
32-bit float extends the dynamic range to roughly 1528 dB, which is more than enough headroom to handle the summed sources.
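Here's a toy Python sketch of that summing example (my own simplification: each "source" is a single full-scale sample, and fixed-point storage is modelled as a hard limit at ±1.0):

```python
def clip(sample: float) -> float:
    """Hard-limit to the [-1.0, 1.0] range of fixed-point PCM storage."""
    return max(-1.0, min(1.0, sample))

sources = [1.0] * 10          # ten sources, each peaking at 0 dBFS
mix = sum(sources)            # summed buss level: 10.0 (about +20 dBFS)

fixed_point = clip(mix)       # 1.0 -> waveform truncated ('clipped')
floating = mix                # 10.0 -> the overs are preserved intact

print(fixed_point, floating)  # 1.0 10.0
```

The fixed-point version throws away everything above 0 dBFS; the float version simply keeps the larger number.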
Now you can't send that 32-bit float signal directly through a 24-bit D/A converter, but you can process it and store it digitally at 32-bit float.
So if you saved that processing to a pulse-code modulated (PCM) format like a 24-bit WAV file, you would have a clipped, aliased and distorted mess.
If you saved it to a 32-bit float WAV file instead, you would only have the APPEARANCE of a clipped, aliased and distorted mess! The visual representation of the wave would look like a big fat sausage or block.
A simple "Normalize to 0dB" would restore the entire stored waveform to a listenable state: no clipping, aliasing or distortion.
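That "Normalize to 0dB" step can be sketched in a few lines of Python (my own illustration: normalizing just rescales the whole buffer so its peak sits at exactly 1.0, i.e. 0 dBFS):

```python
def normalize(buffer: list[float]) -> list[float]:
    """Scale the buffer so its absolute peak lands exactly at 1.0 (0 dBFS)."""
    peak = max(abs(s) for s in buffer)
    return [s / peak for s in buffer]

hot_mix = [4.0, -8.0, 2.0, 6.0]   # float samples peaking well above 0 dBFS
restored = normalize(hot_mix)
print(restored)                   # [0.5, -1.0, 0.25, 0.75]
```

Because float storage kept the overs intact, the division loses nothing; the same operation on a hard-clipped 24-bit file can't bring back samples that were already truncated to full scale.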
So where it matters internally is that you can drive the gain of a source, or of summed sources, well into the RED (above 0 dB) and enjoy all the many benefits and effects that has on your other sources while they are being summed or processed.
Does that make the concept a little less muddy?
(probably not! but the more you learn about it and experiment with it the clearer it will become!)