Also, the term "sample accurate" is a bit of a misnomer. When folks use the term, they are really describing the precision of the automation, not its accuracy.
Imagine we have automation with a data point at every sample. Then the precision of your automation is the same as your sample rate. Now suppose you play back and read that automation, and each automation data point lands exactly where it was when the automation was written (i.e. the automation data point occurs at exactly the same time as the sample that was playing when the automation was written). If that occurs, your accuracy is 100%. But let's imagine that the automation data point occurs a full second after (or before) its corresponding sample. In that case the accuracy sucks even though your precision is still at the sample rate.
Here's a second scenario. Your automation only has one data point for every 1,000 samples, so it is a thousand times less precise. But if each automation data point occurs at exactly the same time as its corresponding sample, then it too has an accuracy of 100% even though the precision is much lower. And likewise, if it is late or early by a second, the accuracy is just as bad as in the first scenario.
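To make the distinction concrete, here's a toy Python sketch of the two scenarios. All of the numbers are made up (a 48 kHz sample rate is assumed); the point is only that precision (spacing between points) and accuracy (how far each point lands from where it was written) are independent:

```python
# Toy illustration: precision vs. accuracy of automation data.
# Precision = spacing between automation points; accuracy = playback offset.

SAMPLE_RATE = 48_000  # samples per second (assumed)

def automation_times(spacing, n_points, offset=0):
    """Sample positions of automation points: one every `spacing` samples,
    shifted by `offset` samples on playback."""
    return [i * spacing + offset for i in range(n_points)]

# Scenario 1: a point at every sample (max precision), played back 1 s late.
written = automation_times(spacing=1, n_points=5)
played  = automation_times(spacing=1, n_points=5, offset=SAMPLE_RATE)
error   = [p - w for w, p in zip(written, played)]
print(error)  # every point is 48,000 samples (1 second) off -> poor accuracy

# Scenario 2: a point every 1,000 samples (less precise), played back on time.
written = automation_times(spacing=1_000, n_points=5)
played  = automation_times(spacing=1_000, n_points=5, offset=0)
error   = [p - w for w, p in zip(written, played)]
print(error)  # every error is 0 -> 100% accuracy despite lower precision
```

Scenario 2 is both less precise and more accurate than scenario 1, which is exactly why the two terms shouldn't be conflated.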
Now in the real world you can never be 100% accurate, so the real issue is how far off you can be before you can hear the difference. If the automation data point is late by 10, 20, or 30 samples, it is still so close you can't hear the difference. But as you increase that number, you will eventually be able to hear it. That number is the one that matters - let's call it the significant delay (and it could be a positive or negative value). It describes how inaccurate the timing can be before it matters. I don't know what that amount is, but I'm sure it has been studied, and I'd expect Steinberg to have read the material (but who knows).
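For a sense of scale, here's the quick arithmetic for converting a sample offset into milliseconds (again assuming a hypothetical 48 kHz session) - this is the scale at which any "significant delay" threshold would be measured:

```python
# Convert an offset measured in samples to milliseconds at a given rate.

SAMPLE_RATE = 48_000  # assumed

def samples_to_ms(n_samples, sample_rate=SAMPLE_RATE):
    return 1_000 * n_samples / sample_rate

for n in (10, 30, 1_000, 10_000):
    print(f"{n:>6} samples = {samples_to_ms(n):.3f} ms")
# 10 samples ~ 0.208 ms, 30 ~ 0.625 ms, 1,000 ~ 20.833 ms, 10,000 ~ 208.333 ms
```

So an error of a few dozen samples is a fraction of a millisecond, while a 10,000-sample error is already in clearly audible territory for timing.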
If the inaccuracy is less than this significant delay, it will sound 100% accurate even though it isn't. Additionally, there is the issue of drift. If the delay were consistently off by a constant amount, the software could compensate for it. But if it sometimes drifts by 2,000 samples and other times by 10,000, that creates a whole new layer of problems, because some automation data points' delay might be less than the significant delay and others greater. When that happens your automation will never sound consistent; it will vary from one playback to the next.
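Here's a sketch of why a constant offset is fixable but a drifting one isn't. The threshold value is completely made up; the logic is just: a single compensation value can cancel a constant delay, but not a varying one:

```python
# Constant vs. drifting playback offsets, against an audibility threshold.

SIGNIFICANT_DELAY = 5_000  # audibility threshold in samples (made-up number)

def audible_errors(offsets, compensation=0):
    """Which playback offsets remain audible after a fixed compensation?"""
    return [abs(o - compensation) > SIGNIFICANT_DELAY for o in offsets]

# Constant delay: shift everything back by the same amount and it's gone.
constant = [8_000, 8_000, 8_000, 8_000]
print(audible_errors(constant, compensation=8_000))  # all False: fixable

# Drifting delay: no single compensation value cancels all of these.
jittery = [2_000, 10_000, 3_000, 9_000]
print(audible_errors(jittery))  # [False, True, False, True]
```

The mixed True/False result is the "never sounds consistent" problem: some points fall inside the significant delay and some outside, and which ones do can change on every playback.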
Finally, the reason you might not want sample-precise automation data is that it generates a lot of unnecessary data points. Processing these in turn consumes more of your computing resources, most of it wasted. That leads to lower track and plug-in counts, etc., because you run out of resources sooner. Increasing the precision beyond what is needed can actually reduce the accuracy, because of processing delays caused by the excess computing load.
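A back-of-envelope count shows how fast this adds up. The track and parameter counts here are hypothetical:

```python
# Automation data points per second: sample-precise vs. every 1,000 samples.

SAMPLE_RATE = 48_000            # assumed
TRACKS, PARAMS_PER_TRACK = 100, 8  # hypothetical project size

per_param_sample_precise = SAMPLE_RATE           # 48,000 points/s per parameter
per_param_every_1000     = SAMPLE_RATE // 1_000  # 48 points/s per parameter

total_sample_precise = per_param_sample_precise * TRACKS * PARAMS_PER_TRACK
total_every_1000     = per_param_every_1000 * TRACKS * PARAMS_PER_TRACK
print(total_sample_precise)  # 38,400,000 points/s to process
print(total_every_1000)      # 38,400 points/s -- 1,000x less work
```

Tens of millions of automation values per second is real CPU load for almost no audible benefit, which is the trade-off described above.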