This class was only used in two places, both on the results screen,
but it was holding references to `OsuPlayfield` game-wide (due to
unrelated issues, but still).
Because I can't really think of future use cases for this, and running
the calculation twice at the results screen isn't a huge overhead,
let's just do that for now to keep things simple.
We are already manually calling `base.UpdateSubTree` when we need to.
Changing this flag does nothing and just adds complexity to the
implementation.
Fixes an issue that occurs on *about* 246 beatmaps and was first
described by me on Discord:
https://discord.com/channels/188630481301012481/188630652340404224/1154367700378865715
and then rediscovered during work on
https://github.com/ppy/osu/pull/26405:
https://gist.github.com/bdach/414d5289f65b0399fa8f9732245a4f7c#venenog-on-ultmate-end-by-blacky-overdose-631
It so happens that in stable, due to .NET Framework internals, float
math would be performed using x87 registers and opcodes.
.NET (Core), however, uses SSE instructions on 32- and 64-bit words.
x87 registers are _80 bits_ wide, which is notably wider than _both_
`float` (32 bits) and `double` (64 bits). Therefore, on a significant
number of beatmaps, the final rounding would not reproduce stable's
values, because the intermediate math runs at insufficient precision.
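To see why the register width matters, recall that a float's nominal
value and its actual binary value differ: stable's x87 intermediates
were kept wider than `float`, while SSE float math rounds every
intermediate back to 32-bit `float`. A minimal illustration (printed
values assume .NET Core 3.0+ shortest-round-trip / exact formatting):

float f = 1.3f;
Console.WriteLine(f);                 // 1.3 (shortest round-trip string)
Console.WriteLine((double)f);         // 1.2999999523162842 (the float's binary value, widened)
Console.WriteLine(f.ToString("F25")); // 1.2999999523162841796875000 (exact expansion)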
See the following gist for corroboration of the above:
https://gist.github.com/bdach/dcde58d5a3607b0408faa3aa2b67bf10
Thus, to crudely - but, after checking across all ranked maps,
seemingly accurately - emulate this, use `decimal`, which is slow, but
has greater precision than `double`. The single known beatmap for
which this still yields an incorrect result is
https://osu.ppy.sh/beatmapsets/1156087#osu/2625853
which is considered an "acceptable casualty" of sorts.
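For illustration only, a minimal sketch of what such a decimal-based
computation could look like - the method name, signature and formula
below are hypothetical stand-ins, not the actual new method:

static int calculatePeppyStarsLike(float drainRate, float overallDifficulty, float circleSize, int objectCount, int drainLengthSeconds)
{
    // the (double) casts are the compiler / runtime "fooling" described
    // in the next paragraph: they make the decimal capture the float's
    // underlying binary value (1.29999995...) rather than its
    // 7-significant-digit rounding (1.3).
    decimal hp = (decimal)(double)drainRate;
    decimal od = (decimal)(double)overallDifficulty;
    decimal cs = (decimal)(double)circleSize;

    decimal objectToDrainRatio = drainLengthSeconds > 0
        ? Math.Clamp((decimal)objectCount / drainLengthSeconds * 8, 0, 16)
        : 16;

    // decimal intermediates carry ~28 significant digits, comfortably
    // more than double's 15-17, crudely standing in for stable's wider
    // x87 intermediates.
    return (int)Math.Round((hp + od + cs + objectToDrainRatio) / 38 * 5);
}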
Doing this requires some fooling of the compiler / runtime (see the
second inline comment in the new method). To corroborate that this is
required, you can try the following code snippet:
// little-endian bytes of float 1.3f and double 1.3
Console.WriteLine(string.Join(' ', BitConverter.GetBytes(1.3f).Select(x => x.ToString("X2"))));
Console.WriteLine(string.Join(' ', BitConverter.GetBytes(1.3).Select(x => x.ToString("X2"))));
Console.WriteLine();

decimal d1 = (decimal)1.3f;          // float -> decimal, directly
decimal d2 = (decimal)1.3;           // double -> decimal
decimal d3 = (decimal)(double)1.3f;  // float -> double -> decimal

Console.WriteLine(string.Join(' ', decimal.GetBits(d1).SelectMany(BitConverter.GetBytes).Select(x => x.ToString("X2"))));
Console.WriteLine(string.Join(' ', decimal.GetBits(d2).SelectMany(BitConverter.GetBytes).Select(x => x.ToString("X2"))));
Console.WriteLine(string.Join(' ', decimal.GetBits(d3).SelectMany(BitConverter.GetBytes).Select(x => x.ToString("X2"))));
which will print

66 66 A6 3F
CD CC CC CC CC CC F4 3F

0D 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00
0D 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00
8C 5D 89 FB 3B 76 00 00 00 00 00 00 00 00 0E 00
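To read the decimal dumps: `decimal.GetBits()` returns three 32-bit
mantissa words followed by a flags word whose bits 16-23 hold the
base-10 scale. A small illustrative helper to decode them (sign bit
ignored for brevity):

using System.Numerics;

static void decode(decimal d)
{
    int[] bits = decimal.GetBits(d);
    // assemble the 96-bit mantissa from the lo/mid/hi words
    BigInteger mantissa = new BigInteger(bits[0] & 0xFFFFFFFFL)
                          | new BigInteger(bits[1] & 0xFFFFFFFFL) << 32
                          | new BigInteger(bits[2] & 0xFFFFFFFFL) << 64;
    int scale = (bits[3] >> 16) & 0xFF;
    Console.WriteLine($"{mantissa} * 10^-{scale}");
}

This prints `13 * 10^-1` (exactly 1.3) for both `d1` and `d2`, and
`129999995231628 * 10^-14` for `d3` - i.e. `double`'s
15-significant-digit view of the float's binary value.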
Note that despite `d1` being converted from a less precise
floating-point value than `d2`, it is still represented 100%
accurately as a decimal number: conversion from `float` to `decimal`
rounds to 7 significant digits (yielding exactly 1.3), whereas
conversion from `double` rounds to 15. Only the `(decimal)(double)`
cast chain of `d3` captures the float's underlying binary value, which
is precisely why the fooling described above is required.
After applying this change, recomputation of legacy scoring attributes
for *all* rulesets will be required.