Bump difficulty calculator versions so that the new star difficulty is shown to users on the next release.
catch's difficulty calculator version is not bumped because the only
catch change pending deploy is https://github.com/ppy/osu/pull/28353 and
that affects performance points only.
Closes https://github.com/ppy/osu/issues/29832.
The underlying reason for the incorrect sample playback was an equality
comparer failure.
Samples are contained in several pools which are managed by the
playfield. In particular, the pools are keyed by `ISampleInfo`
instances. This means that for correct operation, `ISampleInfo` has to implement `IEquatable<ISampleInfo>` and also provide a matching `GetHashCode()` implementation. Different audible samples must not compare equal to each other when represented by `ISampleInfo`.
As it turns out, `VolumeAwareHitSampleInfo` failed on this count, because it did not override any equality members. As a result, a `new HitSampleInfo(HitSampleInfo.HIT_NORMAL, HitSampleInfo.BANK_NORMAL, volume: 70)` was allowed to compare equal to a `VolumeAwareHitSampleInfo` wrapping it, *even though they correspond to completely different sounds and go through entirely different lookup path sequences*.
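A self-contained sketch of that aliasing, using simplified stand-in types rather than the actual osu! classes:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-ins for ISampleInfo / HitSampleInfo / VolumeAwareHitSampleInfo.
class SampleInfo : IEquatable<SampleInfo>
{
    public string Name { get; }
    public int Volume { get; }

    public SampleInfo(string name, int volume)
    {
        Name = name;
        Volume = volume;
    }

    public virtual bool Equals(SampleInfo? other)
        => other != null && Name == other.Name && Volume == other.Volume;

    public override bool Equals(object? obj) => Equals(obj as SampleInfo);
    public override int GetHashCode() => HashCode.Combine(Name, Volume);
}

// Sounds completely different, but inherits the base equality members unchanged.
class VolumeAwareSampleInfo : SampleInfo
{
    public VolumeAwareSampleInfo(SampleInfo wrapped)
        : base(wrapped.Name, wrapped.Volume) { }
}

class Demo
{
    static void Main()
    {
        var pools = new Dictionary<SampleInfo, string>();

        var plain = new SampleInfo("normal-hitnormal", 70);
        pools[plain] = "pool for the plain sample";

        // Prints "pool for the plain sample": the volume-aware key aliases
        // with the plain one, so the wrong pool (and thus the wrong sound)
        // is handed back.
        Console.WriteLine(pools[new VolumeAwareSampleInfo(plain)]);
    }
}
```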
To fix, provide proper equality implementations for `VolumeAwareHitSampleInfo`.
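In terms of the same stand-in types, the shape of the fix is roughly the following (the real `VolumeAwareHitSampleInfo` members may differ in detail):

```csharp
class VolumeAwareSampleInfo : SampleInfo
{
    public VolumeAwareSampleInfo(SampleInfo wrapped)
        : base(wrapped.Name, wrapped.Volume) { }

    // Only ever equal to other volume-aware infos, never to a plain SampleInfo.
    public override bool Equals(SampleInfo? other)
        => other is VolumeAwareSampleInfo && base.Equals(other);

    // Salt the hash with the type so wrapper and wrapped land in different
    // dictionary buckets to begin with.
    public override int GetHashCode()
        => HashCode.Combine(base.GetHashCode(), typeof(VolumeAwareSampleInfo));
}
```

(Strictly speaking the base type's `Equals` should also reject subtypes to keep equality symmetric, but the salted hash alone already stops the pools from aliasing.)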
When testing, note that this issue *only occurs immediately after placing an object*. Saving and re-entering the editor makes this issue go away. I haven't looked too long into why, but the general gist of it is ordering; it appears that a `normal-hitnormal` pool exists at the point of querying a new object placement, but does not seem to exist when entering the editor afresh. That said, I'm not sure that the ordering aspect of this bug matters much if at all, since the two `ISampleInfo`s should never be allowed to alias with each other with respect to equality in the first place.
Closes https://github.com/ppy/osu/issues/28989.
Because swell ticks are judged manually by their parent objects, swell ticks were not given a start time (with the thinking that there isn't really one *to* give). This tripped up the "judge past objects" logic in `EditorPlayer`, which enumerates all objects (regardless of nesting) prior to the current time and marks them as judged. With all swell ticks having the default start time of 0, they would get judged more often than not, leading to all sorts of weird behaviour.
To resolve, give swell ticks a *relatively* sane start time equal to
the start time of the swell itself.
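A sketch of the fix's shape, assuming `Swell` creates its ticks in `CreateNestedHitObjects()` (details may differ from the actual diff):

```csharp
protected override void CreateNestedHitObjects(CancellationToken cancellationToken)
{
    base.CreateNestedHitObjects(cancellationToken);

    for (int i = 0; i < RequiredHits; i++)
    {
        cancellationToken.ThrowIfCancellationRequested();

        // Previously the ticks were added with no start time, leaving them
        // at the default of 0 and thus "in the past" for nearly all editor
        // times. Give each tick the swell's own start time instead.
        AddNested(new SwellTick { StartTime = StartTime });
    }
}
```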
Before going at these with a hammer to redesign them, I want to first remove the stuff that does nothing.
Hard-breaks API to allow rulesets to specify an enumerable of custom
sections rather than two specific weird ones.
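The new shape of the API is presumably something along these lines (`CreateEditorSetupSections` and the section types here are assumed names for illustration):

```csharp
// In Ruleset: one overridable enumerable of sections, instead of two
// dedicated hooks for ruleset-specific settings and colours.
public virtual IEnumerable<SetupSection> CreateEditorSetupSections() => new SetupSection[]
{
    new DifficultySection(),
    new ColoursSection(),
};

// In a ruleset that needs less, simply return less:
public override IEnumerable<SetupSection> CreateEditorSetupSections() => new SetupSection[]
{
    new DifficultySection(),
};
```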
For specific rulesets:
- osu!:
  - Stack leniency slider merged into difficulty section.
- osu!taiko:
  - Approach rate and circle size sliders removed.
  - Colours section removed.
- osu!catch:
  - No functional changes.
- osu!mania:
  - Special style toggle merged into difficulty section.
  - Colours section removed.
Closes https://github.com/ppy/osu/issues/28369.
The reporter of the issue was incorrect; it's not the beat snap grid
that is causing the problem, it's something far stupider than that.
When the current selection changes,
`EditorSelectionHandler.UpdateTernaryStates()` is supposed to update the
state of ternary bindables to reflect the reality of the current
selection. This in turn will fire bindable change callbacks for said
ternary toggles, which heavily use `EditorBeatmap.PerformOnSelection()`.
The thing about that method is that, in order to avoid producing empty undo states, it attempts to check whether any changes were actually made, *but* to do this, it must *serialise out the entire beatmap to a stream* and then *binary equality check that*:
7b14c77e43/osu.Game/Screens/Edit/EditorChangeHandler.cs (L65-L69)
It goes without saying that this is very expensive and unnecessary, and it makes stuff like keeping a selection box active while a taiko beatmap plays under it dog slow. So, to attempt to mitigate that, add precondition checks to every single ternary callback of this sort to avoid the serialisation overhead.
And yes, those precondition checks use linq, and that is *still* faster
than not having them.
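For illustration, one such callback might now look like this (hypothetical shape and names, not the exact code):

```csharp
private void setNewComboState(TernaryState state)
{
    if (state != TernaryState.True)
        return;

    // Precondition check: if every selected object is already a new combo,
    // there is nothing to change, so skip PerformOnSelection() and with it
    // the full-beatmap serialisation used for undo-state change detection.
    if (EditorBeatmap.SelectedHitObjects.OfType<IHasComboInformation>().All(c => c.NewCombo))
        return;

    EditorBeatmap.PerformOnSelection(h =>
    {
        if (h is IHasComboInformation combo)
            combo.NewCombo = true;
    });
}
```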
Reported in https://discord.com/channels/188630481301012481/1097318920991559880/1221836384038551613.
Example score: https://osu.ppy.sh/scores/1855965185
The cause of the overestimation was an error in taiko's score simulator.
In lazer taiko, swell ticks don't give any score anymore, while they did
in stable.
For all intents and purposes, swell ticks can be considered "bonus" objects that "don't give any actual bonus score". Which is to say, during simulation of a legacy score, swell tick hits should be treated as bonus, because otherwise they will be treated essentially as *normal hits*, meaning that they will be included in the *accuracy* portion of score, which breaks all sorts of follow-up assumptions:
- The accuracy portion of the best possible total score becomes
overinflated in comparison to reality, while the combo portion of
that maximum score becomes underestimated.
- Because the affected score has low accuracy, the estimated accuracy portion of the score (as given by the maximum accuracy portion of score times the actual numerical accuracy of the score) is also low.
- However, the next step is estimating the combo portion, which is done by taking legacy total score, subtracting the aforementioned estimation for accuracy total score from that, and then dividing the result by the maximum achievable combo score on the map. Because most of the actual "combo" score from swell ticks was "moved" into the accuracy portion due to the aforementioned error, the maximum achievable combo score becomes so small that the estimated combo portion exceeds 1 (see the worked example after this list).
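To put toy numbers on that last step (all figures invented for illustration): suppose the erroneous simulation splits the maximum legacy score into 950,000 for accuracy and only 50,000 for combo, because the swell tick gains were wrongly moved into the accuracy half. A real legacy score of 500,000 total at 30% accuracy is then estimated as 0.30 × 950,000 = 285,000 of accuracy score, leaving 500,000 − 285,000 = 215,000 attributed to combo, and 215,000 / 50,000 = 4.3, i.e. a combo portion of 430%.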
Instead, this change makes it so that gains from swell ticks are treated as "bonus", which means that they are excluded from the accuracy portion of score and instead count towards the bonus portion. This brings the scores concerned more in line with expectations, although due to pessimistic assumptions in the simulation of the swell itself, the conversion will still overestimate total score for affected scores, just not by *that* much.
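In simulator terms, the change is roughly the following (a sketch against the general shape of the legacy score simulators; variable names and the score value are assumed, not taken from the actual diff):

```csharp
// Inside the per-object switch of taiko's legacy score simulator:
case SwellTick:
    scoreIncrease = 300;   // score stable awarded per tick (illustrative value)
    increaseCombo = false;
    isBonus = true;        // NEW: keep the gain out of the accuracy portion...
    bonusResult = HitResult.LargeBonus; // ...and count it as bonus instead
    break;
```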