The previous code would run a calculation for the beatmap's own ruleset
if the current one failed. While this does make sense, with the current
way we use this component (and the implementation flow) it is quite unsafe.
Due to the call to `.Result` in the `catch` block, this would 100%
deadlock, since the thread concurrency of the `ThreadedTaskScheduler`
is 1. Even if the nested run could be run inline (it should be), the
task scheduler won't even get to the point of checking whether this is
feasible, as it is already saturated by the running task.
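For illustration, here's a self-contained sketch of the same shape (the `SingleWorkerScheduler` below is a stand-in written for demonstration, not the framework's actual `ThreadedTaskScheduler`):
```
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// A minimal single-threaded scheduler standing in for a scheduler with a
// thread concurrency of 1 (hypothetical, for demonstration only).
class SingleWorkerScheduler : TaskScheduler
{
    private readonly BlockingCollection<Task> queue = new BlockingCollection<Task>();

    public SingleWorkerScheduler()
    {
        new Thread(() =>
        {
            foreach (var task in queue.GetConsumingEnumerable())
                TryExecuteTask(task);
        }) { IsBackground = true }.Start();
    }

    protected override void QueueTask(Task task) => queue.Add(task);

    // Models the worst case described above: the nested task is never
    // considered for inline execution.
    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued) => false;

    protected override IEnumerable<Task> GetScheduledTasks() => queue;

    public override int MaximumConcurrencyLevel => 1;
}

static class DeadlockDemo
{
    static void Main()
    {
        var scheduler = new SingleWorkerScheduler();

        var outer = Task.Factory.StartNew(() =>
        {
            // Mirrors the catch-block fallback: a nested lookup queued to the
            // same scheduler, then a blocking .Result while the only worker
            // is busy running this very delegate.
            var nested = Task.Factory.StartNew(() => 0,
                CancellationToken.None, TaskCreationOptions.None, scheduler);

            return nested.Result;
        }, CancellationToken.None, TaskCreationOptions.None, scheduler);

        Console.WriteLine(outer.Wait(TimeSpan.FromSeconds(2)) ? "completed" : "deadlocked");
    }
}
```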
I'm not sure if we still need this fallback lookup logic. After removing
it, it's feasible that 0 stars will be returned during the scenario that
previously caused a deadlock, but I don't necessarily think this is
incorrect. There may be another reason for this needing to exist which
I'm not aware of (diffcalc?) but if that's the case we may want to move
the try-catch handling to the point of usage.
To reproduce the deadlock scenario with 100% success (the repro
instructions in the linked issue aren't that simple and require some
patience and good timing), the main portion of the lookup can be changed
to randomly trigger a nested lookup:
```
if (RNG.NextSingle() > 0.5f)
    return GetAsync(new DifficultyCacheLookup(key.Beatmap, key.Beatmap.Ruleset, key.OrderedMods)).Result;
else
    return new StarDifficulty(attributes);
```
After switching beatmaps once or twice, pausing the debugger and viewing
the state of threads should show exactly what is going on.
It turns out this polling was necessary to get extra data that isn't
included in the main listing request. It was removed after being deemed
useless, and in order to fix the order of rooms changing when selecting
a room.
Weirdly, I can't reproduce this happening any more, and on close
inspection of the code can't see how it could happen in the first place.
For now, let's revert this change and iterate from there, if/when the
same issue arises again.
I've discussed avoiding this second poll by potentially including more
data (just `user_id`s?) in the main listing request, but I'm not 100%
sure on this - even if the returned data is minimal, it's an extra join
server-side, which could cause performance issues for large numbers of
rooms.
In stable, the type of legacy judgement to show is based on the presence
of particle textures in the skin. We were using the skin version
instead, which turns out to be incorrect and not what some user skins
expect.
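As a rough sketch of what the check now looks like (the helper names here are hypothetical):
```
// Sketch (hypothetical helpers): key off the presence of particle textures
// ("particle50" etc.), as stable does, rather than the skin version.
private Drawable getJudgementPiece(ISkin skin)
{
    bool hasParticles = skin.GetTexture(@"particle50") != null;

    return hasParticles ? createParticleJudgement() : createFadeJudgement();
}
```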
Closes #11078.
`onLoadRequested()` always released the `readyClickOperation` ongoing
operation, without checking whether it actually should (it should only
do so if the action that initiated the operation was the host starting
the game). This would crash all other consumers, who had already
released the operation when their ready-up request completed
server-side.
To resolve, relax the constraint such that the operation can be ended
multiple times in any order. At the end of the day the thing that
matters is that the operation is done and the ready button is unblocked.
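A minimal sketch of the relaxed contract (hypothetical class, simplified from the real tracker):
```
using System.Threading;

public class OngoingOperation
{
    private int ended;

    // Ending is idempotent: only the first call transitions state, so multiple
    // consumers may "finish" the operation in any order without failures.
    public void End()
    {
        if (Interlocked.Exchange(ref ended, 1) != 0)
            return;

        // release the operation / unblock the ready button here.
    }
}
```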
Finishing an operation started via
`OngoingOperationTracker.BeginOperation()` was risky in cases where the
operation ended in a callback on another thread (which, in the case of
multiplayer, is *most* cases). In particular, if any consumer registered
a callback that mutates transforms when the operation ends, it would
result in crashes now that the framework-side safety checks are in
place.
Rework `OngoingOperationTracker` into an always-present component
residing in the drawable hierarchy, and ensure that the
`operationInProgress` bindable is always updated on the update thread.
This way consumers don't have to add local schedules in multiple places.
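Roughly the shape this takes (a simplified sketch; the real component returns a disposable operation and has more to it):
```
using osu.Framework.Bindables;
using osu.Framework.Graphics;

// Sketch: a hierarchy-resident tracker which marshals completion back to the
// update thread, so consumers don't need local schedules.
public class OngoingOperationTracker : Component
{
    private readonly Bindable<bool> operationInProgress = new BindableBool();

    public IBindable<bool> InProgress => operationInProgress;

    public void BeginOperation() => operationInProgress.Value = true;

    // May be invoked from any thread (e.g. a network callback); the bindable
    // mutation itself always happens on the update thread.
    public void EndOperation() => Schedule(() => operationInProgress.Value = false);
}
```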
Fixes a potential crash when moving from main menu to editor after
having previously opened the login settings overlay. Setting the
activity in BDL as done before is unsafe, as that set can trigger value
change callbacks, which in turn can trigger adding transforms, which
should always be done on the update thread.
Semantically it also makes sense, as the user activity should change
once the screen they're moving to has actually loaded and been displayed
to the user.
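Something along these lines (field names hypothetical):
```
// Sketch: defer the activity change from [BackgroundDependencyLoader] to
// LoadComplete(), which always runs on the update thread after the screen
// has loaded.
protected override void LoadComplete()
{
    base.LoadComplete();
    activity.Value = initialActivity;
}
```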
Culls another non-thread-safe mutation of the `Playing` bindable.
It seems to be a weird vestige from an earlier revision of the old
"direct" panel, which relied on `DisposeOnDeathRemoval` to finish track
playback (and then was removed in
6c150c9ed7). The play button is no longer
responsible for managing preview track lifetime anyway;
`PreviewTrackManager`'s methods are intended for that.
Revealed by the framework-side transform thread safety checks. `Stopped`
is even annotated as not being thread-safe (but was annotated as such
long after the class's nascence).
It was added in c9f38f7bb6, which
specified 2021 in another place (and was committed in October of 2020
anyway). Update the year so that it doesn't get culled prematurely.
In theory this seemed like a good idea (and an optimisation in some
cases, due to lower fill rate), but in practice this leads to weird edge
cases.
This aims to do away with the operations on external drawables by
applying a dim to the area behind the `LoadingLayer` when required.
I went over each usage and ensured they look as good or better than
previously.
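For reference, the rough shape of the change (a sketch, not the actual implementation):
```
using osu.Framework.Graphics;
using osu.Framework.Graphics.Containers;
using osu.Framework.Graphics.Shapes;

// Sketch (hypothetical component): the loading layer dims the area behind
// itself rather than mutating the colour of external drawables.
public class DimmedLoadingLayer : CompositeDrawable
{
    public DimmedLoadingLayer()
    {
        RelativeSizeAxes = Axes.Both;

        AddInternal(new Box
        {
            RelativeSizeAxes = Axes.Both,
            Colour = Colour4.Black,
            Alpha = 0.5f,
        });

        // the spinner itself would be added on top here.
    }
}
```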
The specific bad usage here was the restoration of the colour on dispose
(if the `LoadingLayer` was disposed in a still-visible state).
I'm aware that the `BeatmapListingOverlay` will now dim completely during
load. I think this is fine for the time being.
It turns out there are some quite convoluted scheduling / order of
execution requirements in `ModSelectOverlay` and `ModSection`. Applying
scheduling causes a runaway condition ending in zero frames after many
mod button changes.
I wanted to avoid rewriting the whole component, so have just moved the
schedule to guard against the part where drawables are actually changed.
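In sketch form (names hypothetical):
```
// Only the block which mutates child drawables is scheduled; the surrounding
// bookkeeping still runs inline.
private void updateModButtons(IReadOnlyList<Mod> newMods)
{
    // non-drawable state updates can happen here...

    Schedule(() =>
    {
        // ...while the actual drawable changes run on the update thread.
        buttonsContainer.ChildrenEnumerable = newMods.Select(createModButton);
    });
}
```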
There are 3 modes: enabled, limited, and disabled.
The limited mode hides identifiable information such as username, rank, and
(if participating in one) multiplayer lobby name.
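In sketch form (the enum and member names here are assumed, not necessarily the real ones):
```
public enum PresenceMode
{
    Disabled,
    Limited, // hides username, rank and multiplayer lobby name
    Enabled,
}
```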
Silencing an exception from a task continuation requires accessing
`task.Exception` in any way, which was not done previously if
`logOnError` was false.
To resolve without having to worry about whether the compiler will
optimise away a useless access or not, just always log, but switch the
logging level. Unimportant errors will be logged as debug and therefore
essentially silenced on release builds (but could still be potentially
useful when debugging).
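A self-contained sketch of the idea (hypothetical helper; the console write stands in for the real logger):
```
using System;
using System.Threading.Tasks;

public static class TaskObservationExtensions
{
    // Reading task.Exception is what marks the exception as observed; logging
    // now always happens, with importance only controlling the level.
    public static Task LogUnobservedException(this Task task, bool important = false)
        => task.ContinueWith(t =>
        {
            Exception? exception = t.Exception;

            // "error" level surfaces to the user; "debug" level is essentially
            // silent on release builds but available when debugging.
            Console.WriteLine($"[{(important ? "error" : "debug")}] task failed: {exception?.InnerException?.Message}");
        }, TaskContinuationOptions.OnlyOnFaulted);
}
```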
For better guarantees that `finalAction` is actually called on the same
screen that `checkCanComplete()` was called on (the result of one
`getCurrentScreen()` call is used throughout, instead of calling it
multiple times).
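That is, roughly (using the names already mentioned above):
```
// Capture the screen once so the completion check and the final action are
// guaranteed to operate on the same instance.
var screen = getCurrentScreen();

if (checkCanComplete(screen))
    finalAction(screen);
```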
These can occur when a network connection is completely unavailable (i.e.
host resolution failures are occurring). Currently these appear as
important errors which spam the notification overlay on every retry,
forever, for as long as no network connection is available.
I also took this opportunity to remove a lot of `bool` passing which was
no longer in use (previously the fail count / retry process was
different to what we have today).
Looks like this was just copy-paste without any thought about whether it
should exist. It really shouldn't exist.
This is a thing for the playlists system because the *whole system*
there relies on polling the web API to get updated information. In the
case of multiplayer, we hand off all communications to the realtime
server at the point of joining the room.
The argument that this was there to do faster polling on the selection
isn't valid since the polling times were the same for both cases.
Closes #11348.
It needs to be explicitly stated that the users in this list are related
to the *joined* room, especially since it shares its variable name with
one in `SpectatorStreamingClient`, where it has the opposite meaning (a
list of *globally* playing players).