The UAD myth?

Wow, I never thought of that, but it makes sense when you put it that way.

I’m wondering why I never noticed any phasiness; maybe I will now that I’m looking for it?

OR … is the latency of a few samples irrelevant because the signal is being processed more than delayed … and so it’s different enough from the source that there aren’t any problematic phase interactions (wave cancellations/additions)?
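To show what I mean by cancellations/additions, here’s a rough numpy sketch I put together (purely illustrative, the 4-sample delay is just a made-up number, nothing UAD-specific): summing a signal with a copy of itself delayed by a few samples acts like a comb filter, with the first cancellation notch at 1/(2·delay).

```python
# Illustrative sketch only: dry signal + a copy delayed by a few samples = comb filter.
import numpy as np

sample_rate = 44100          # Hz
delay_samples = 4            # hypothetical latency of a few samples (made-up number)
delay_seconds = delay_samples / sample_rate

# The first notch sits where the delayed copy is 180 degrees out of phase:
# f_notch = 1 / (2 * delay), with further notches every 1/delay Hz above that.
first_notch_hz = 1.0 / (2.0 * delay_seconds)
print(f"First cancellation notch: {first_notch_hz:.0f} Hz")   # ~5.5 kHz here

# Demonstrate with a sine right at that notch: dry + delayed cancels almost completely.
t = np.arange(sample_rate) / sample_rate
dry = np.sin(2 * np.pi * first_notch_hz * t)
delayed = np.concatenate([np.zeros(delay_samples), dry[:-delay_samples]])
summed = dry + delayed
print(f"Peak of dry signal:    {np.max(np.abs(dry)):.3f}")
print(f"Peak of dry + delayed: {np.max(np.abs(summed[delay_samples:])):.3f}")
```

That’s the worst case, though: it only bites if the delayed copy is actually mixed back with an un-delayed version of the same source, which is kind of the crux of my question above.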

Newbie thoughts, I know, but could someone shed some insight please?

Thanks!