For wheel input with precision, we still prefer exact tracking for now. We may change this in the future based on feedback from mappers, but it makes little sense to do non-snapped scrolling when the input is coming from a non-precise source.
The previous code would run a calculation for the beatmap's own ruleset
if the current one failed. While this does make sense, with the current
way we use this component (and the implementation flow) it is quite unsafe.
Due to the call to `.Result` in the `catch` block, this would deadlock
100% of the time, because the thread concurrency of the
`ThreadedTaskScheduler` is 1. Even if the nested task could be run inline
(it should be), the task scheduler never gets to the point of checking
whether this is feasible, because it is already saturated by the running task.
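For illustration, here is a minimal sketch of the deadlock pattern, assuming the scheduler behaves like a single dedicated worker thread draining a queue with no inlining support. This is a toy stand-in, not osu.framework's actual `ThreadedTaskScheduler`:
```
// Toy stand-in for a concurrency-1 task scheduler; not the real ThreadedTaskScheduler.
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class SingleThreadScheduler : TaskScheduler
{
    private readonly BlockingCollection<Task> queue = new BlockingCollection<Task>();

    public SingleThreadScheduler()
    {
        // One dedicated worker thread drains the queue, one task at a time.
        new Thread(() =>
        {
            foreach (var task in queue.GetConsumingEnumerable())
                TryExecuteTask(task);
        })
        { IsBackground = true }.Start();
    }

    protected override void QueueTask(Task task) => queue.Add(task);

    // No inlining support: a nested wait can only be satisfied by the (blocked) worker.
    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued) => false;

    protected override IEnumerable<Task> GetScheduledTasks() => queue.ToArray();
}

class Program
{
    private static readonly SingleThreadScheduler scheduler = new SingleThreadScheduler();

    private static Task<double> ComputeAsync(bool nested) =>
        Task.Factory.StartNew(() =>
        {
            if (!nested)
                return 0.0;

            // Analogous to the `.Result` in the catch block: the worker thread blocks
            // here while the nested task sits in the queue forever.
            return ComputeAsync(false).Result;
        }, CancellationToken.None, TaskCreationOptions.None, scheduler);

    static void Main()
    {
        Console.WriteLine(ComputeAsync(false).Result); // completes fine
        Console.WriteLine(ComputeAsync(true).Result);  // deadlocks
    }
}
```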
I'm not sure if we still need this fallback lookup logic. After removing
it, it's feasible that 0 stars will be returned in the scenario that
previously caused the deadlock, but I don't necessarily think this is
incorrect. There may be another reason for this fallback to exist which
I'm not aware of (diffcalc?), but if that's the case we may want to move
the try-catch handling to the point of usage, as sketched below.
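If the fallback does turn out to be necessary, the following is a hypothetical sketch of what handling it at the point of usage could look like. All types and names here are illustrative stand-ins rather than the actual osu! API; the point is only that the consumer, not the cache, owns the retry decision:
```
// Hypothetical sketch only; these types and method names are illustrative
// stand-ins, not the actual osu! difficulty cache API.
using System;
using System.Threading.Tasks;

readonly struct StarDifficulty
{
    public readonly double Stars;
    public StarDifficulty(double stars) => Stars = stars;
}

class DifficultyCache
{
    // Pretend computation that fails when no calculator exists for the ruleset.
    public Task<StarDifficulty> GetAsync(string beatmap, string ruleset) => Task.Run(() =>
        ruleset == "osu"
            ? new StarDifficulty(5.3)
            : throw new InvalidOperationException($"No calculator for {ruleset}."));
}

class DifficultyDisplay
{
    private readonly DifficultyCache cache = new DifficultyCache();

    // The try-catch lives at the point of usage: the consumer decides whether to
    // retry with the beatmap's own ruleset instead of the cache nesting a lookup.
    public async Task<StarDifficulty> GetDisplayDifficultyAsync(string beatmap, string currentRuleset, string beatmapRuleset)
    {
        try
        {
            return await cache.GetAsync(beatmap, currentRuleset);
        }
        catch
        {
            return await cache.GetAsync(beatmap, beatmapRuleset);
        }
    }
}

class Program
{
    static async Task Main()
    {
        var display = new DifficultyDisplay();
        var difficulty = await display.GetDisplayDifficultyAsync("some beatmap", "mania", "osu");
        Console.WriteLine(difficulty.Stars); // 5.3 via the fallback ruleset
    }
}
```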
To reproduce the deadlock scenario with 100% success (the repro
instructions in the linked issue aren't that simple and require some
patience and good timing), the main portion of the lookup can be changed
to randomly trigger a nested lookup:
```
// Randomly trigger the nested lookup for the beatmap's own ruleset to force the deadlock.
if (RNG.NextSingle() > 0.5f)
    return GetAsync(new DifficultyCacheLookup(key.Beatmap, key.Beatmap.Ruleset, key.OrderedMods)).Result;
else
    return new StarDifficulty(attributes);
```
After switching beatmaps once or twice, pausing the debugger and viewing the
state of the threads should show exactly what is going on.
It turns out this polling was necessary to get extra data that isn't
included in the main listing request. It was removed because it was deemed
useless, and in order to fix the order of rooms changing when selecting a room.
Weirdly, I can't reproduce that issue any more, and on close
inspection of the code I can't see how it could have happened in the first place.
For now, let's revert this change and iterate from there if/when the
same issue arises again.
I've discussed avoiding this second poll by potentially including more
data (just `user_id`s?) in the main listing request, but I'm not 100% sure
about this: even if the returned data is minimal, it's an extra join
server-side, which could cause performance issues for large numbers of
rooms.