Is 32-bit float better than 24-bit?

I would like to challenge anyone to a blind A/B test on hearing 0.0000126% distortion (the rounding error of 24-bit processing).

And I also have to remind you that 32-bit floating point doesn't have more precision than 24-bit fixed point: the IEEE 754 single-precision format stores a 23-bit mantissa plus an implicit leading bit, giving 24 significant bits. It just gives you that precision across an extremely wide gain range.