Post by springheeledjak on Feb 15, 2015 1:07:31 GMT
EDIT: wow, I missed like an entire page of discussion. Stupid. Take what you will from my under-informed comments, though I think most of them are still topical.
CS/math/numerical analysis guy here. The problem is likely not with the floating-point ops themselves -- it's true that most FPUs produce the same inaccuracies given the exact same binary representation of a number. The issue lies in that word, 'exact': physics engines are, pretty much by definition, chaotic systems, with or without Monte Carlo noise added in (i.e. regardless of whether they're strictly deterministic). Being chaotic systems, they display "SDIC": Sensitive Dependence on Initial Conditions. TL;DR: you can't rely on the behavior of the system being the same if any of the variables involved differs at all, even by the tiniest possible amount. The Lorenz attractor and the double pendulum are the classic examples. Randomness need not even factor into it -- though if the engine is doing some kind of timestep interpolation that relies on the system (or GPU) clock, then yeah, you're going to see all kinds of wackiness. System clocks are definitely a random variable, if a low-entropy one.
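To make the SDIC point concrete, here's a minimal toy sketch (the step size, parameters, and the 1e-12 perturbation are mine for illustration, not anything from a real engine): two Lorenz trajectories integrated with plain forward Euler, starting one part in 10^12 apart, that end up nowhere near each other.

```python
# Sensitive dependence on initial conditions, illustrated with the Lorenz
# system and a naive forward-Euler integrator.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one forward-Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

# Two trajectories whose initial x differs by one part in 10^12.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-12, 1.0, 1.0)

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        diff = max(abs(p - q) for p, q in zip(a, b))
        print(f"step {step:5d}: max coordinate difference = {diff:.3e}")

# The gap grows roughly exponentially until the two trajectories are
# completely uncorrelated -- the 1e-12 perturbation ends up order-1.
```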
The fact that IEEE 754 floats have finite precision just adds to this mess: there's a number (apologies for the LaTeX notation) called machine precision or machine epsilon, \(\epsilon_m\) -- the gap between 1 and the next representable float -- and any perturbation much smaller than \(\epsilon_m\) gets absorbed outright, so that \(1 + x \equiv 1\). It's not so much that the result of rounding a given number is different every time, because it probably isn't: it's that the numbers actually going into the computations are ever so slightly different each time, due to the nature of the system. The rounding errors also compound because of that precision loss, especially when doing things like integral or derivative approximations via Taylor series or Newton's method. And on top of all that, there's no telling what kinds of weird shortcuts the various linear and elastic solvers in the engine will take if they get stuck.
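Same thing at the float level, again just a toy Python sketch (the constants and the little update loop are made up for illustration): epsilon-sized perturbations vanish, addition isn't associative, and two mathematically identical updates drift apart as error compounds.

```python
import sys

# Machine epsilon for IEEE 754 doubles: the gap between 1.0 and the next
# representable float above it (2**-52 on typical hardware).
eps = sys.float_info.epsilon
print(eps)                        # 2.220446049250313e-16

# Perturbations well below epsilon are simply absorbed:
print(1.0 + eps / 4 == 1.0)       # True -- the addition changes nothing at all

# Rounding also makes float addition non-associative, so any solver that
# regroups or reorders operations can legitimately produce different bits:
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False
print((0.1 + 0.2) + 0.3, 0.1 + (0.2 + 0.3))     # 0.6000000000000001 vs 0.6

# Compounding: run the same update in two algebraically equivalent forms
# for many steps and the results typically drift apart in the low digits.
x = y = 1.0
dt = 1e-4
for _ in range(100_000):
    x = x * (1.0 + dt)            # one rounding per step
    y = y + y * dt                # two roundings per step
print(x, y, abs(x - y))           # same math on paper, different floats
```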