-
I've created a system for ranking my board game matches using the Plackett-Luce model, and I want to know whether the behavior I'm observing is expected. As I understand it, the algorithm assigns the win probability and the conservative skill gain based on the expected result and on how surprising the actual outcome is.
In my ranking, there are players with different numbers of matches and skill levels. I use a match simulator that, given the current ranking, simulates a match. However, adding more players to the match reduces the conservative skill gain for the first-place player. Here are some examples:
Match with 4 players
Spainer (Rank 1): ΔMu = 0.203, ΔSigma = 0.001, ΔSkill = 0.200
Cris (Rank 2): ΔMu = 0.408, ΔSigma = -0.007, ΔSkill = 0.429
Rubén (Rank 3): ΔMu = -0.189, ΔSigma = -0.003, ΔSkill = -0.178
Maca (Rank 4): ΔMu = -0.760, ΔSigma = -0.027, ΔSkill = -0.679
Match with 3 players
Spainer (Rank 1): ΔMu = 0.215, ΔSigma = 0.000, ΔSkill = 0.215
Cris (Rank 2): ΔMu = 0.425, ΔSigma = -0.014, ΔSkill = 0.466
Rubén (Rank 3): ΔMu = -0.509, ΔSigma = -0.004, ΔSkill = -0.497
Observation:
Spainer earns more conservative skill points in the 3-player match than in the 4-player match, even though his win was more expected there: a 41% win probability in the 3-player game versus 33% in the 4-player game.
If I increase the number of players to 9, the win probability drops to 13%, but the conservative skill gain decreases even further to 0.150.
Question:
Is this the expected behavior of the algorithm? Why does adding more players result in a lower skill gain for the top-ranked player?
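To make the calculation explicit, this is roughly what my simulator does for each match (a minimal sketch; the player names and ratings here are made up, and I take "conservative skill" to mean mu - 3 * sigma throughout):

from openskill.models import PlackettLuce

model = PlackettLuce()

# Placeholder ratings; my real values come from the current ranking.
base = {"A": (30.0, 2.0), "B": (28.0, 2.5), "C": (26.0, 3.0)}
names = list(base)
teams = [[model.rating(mu=mu, sigma=sigma, name=name)] for name, (mu, sigma) in base.items()]

win_probabilities = model.predict_win(teams)    # the "expected result"
new_teams = model.rate(teams, ranks=[1, 2, 3])  # the observed finishing order

for name, new_team, p in zip(names, new_teams, win_probabilities):
    old_mu, old_sigma = base[name]
    new_player = new_team[0]
    # Conservative skill gain = change in (mu - 3 * sigma)
    delta_skill = (new_player.mu - 3 * new_player.sigma) - (old_mu - 3 * old_sigma)
    print(f"{name}: win prob {p:.1%}, ΔSkill {delta_skill:+.3f}")

The deltas quoted above all come from this kind of before/after comparison.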
-
Can you please confirm if this is an issue in version 4.0.0? I think a bug was introduced since then and want to see if that's related.
See #155
-
Hi!
Now that you mention it, I think I'm also experiencing anomalous behavior with ties. A player gains less conservative skill when tying for first place than when finishing second alone, which doesn’t make much sense. Here's an example:
Tied for 1st place:
Rubén (Rank 1): ΔMu = 0.059, ΔSigma = 0.000, ΔSkill = 0.059
Spainer (Rank 1): ΔMu = 0.056, ΔSigma = 0.001, ΔSkill = 0.052
Cris (Rank 3): ΔMu = 0.242, ΔSigma = -0.008, ΔSkill = 0.267
Maca (Rank 4): ΔMu = -0.677, ΔSigma = -0.019, ΔSkill = -0.621
Second place alone:
Spainer (Rank 1): ΔMu = 0.203, ΔSigma = 0.001, ΔSkill = 0.200
Rubén (Rank 2): ΔMu = 0.060, ΔSigma = -0.002, ΔSkill = 0.066
Cris (Rank 3): ΔMu = 0.073, ΔSigma = -0.013, ΔSkill = 0.112
Maca (Rank 4): ΔMu = -0.929, ΔSigma = -0.028, ΔSkill = -0.845
Rubén benefits more from finishing second alone than from sharing first place with Spainer, which seems counterintuitive.
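For reference, this is roughly how I'm feeding the two scenarios to the model (a sketch with placeholder mu/sigma values; the tie is expressed by repeating the rank):

from openskill.models import PlackettLuce

model = PlackettLuce()

def score_match(ranks):
    # Fresh rating objects each time so the two scenarios don't interfere.
    players = [
        model.rating(mu=34.5, sigma=1.9, name="Rubén"),
        model.rating(mu=34.1, sigma=1.7, name="Spainer"),
        model.rating(mu=31.2, sigma=2.5, name="Cris"),
        model.rating(mu=29.1, sigma=3.0, name="Maca"),
    ]
    teams = [[p] for p in players]
    return model.rate(teams, ranks=ranks)

tied_first = score_match([1, 1, 3, 4])    # Rubén and Spainer share 1st place
second_alone = score_match([2, 1, 3, 4])  # Spainer 1st, Rubén 2nd alone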
I'm attaching my OpenSkill version for reference:
Name: openskill
Version: 6.0.1
Summary: Multiplayer Rating System. No Friction.
Home-page:
Author:
Author-email: Vivek Joshy vivekjoshy97@gmail.com
License: MIT
Let me know what you think!
-
Hi!
Have you identified this behavior as a bug? If so, what is the workaround for now: using a model other than Plackett-Luce?
Thanks
-
Can you please pip install openskill==6.1.0a0 and see if it fixes your issue? I released an alpha to make sure it's working as intended.
If it's not fixing the issue, please also provide a simple reproduction script. Thanks :)
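A quick way to confirm which version your interpreter is actually picking up is to print it from inside the script itself:

import openskill

print(openskill.__version__)  # should report the 6.1.0 alpha, not 6.0.1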
-
Can you please pip install openskill==6.1.0a0 and see if it fixes your issue? I released an alpha to make sure it's working as intended. If it's not fixing the issue, please also provide a simple reproduction script. Thanks :)
Hi!
It looks like the issue with ties has been resolved. (All the data I've shared uses the predict_win method.)
New results:
Tied for 1st place:
Rubén (Rank 1): ΔMu = 0.247, ΔSigma = 0.000, ΔSkill = 0.247
Spainer (Rank 1): ΔMu = 0.204, ΔSigma = 0.001, ΔSkill = 0.201
Cris (Rank 3): ΔMu = 0.251, ΔSigma = -0.009, ΔSkill = 0.277
Maca (Rank 4): ΔMu = -0.696, ΔSigma = -0.020, ΔSkill = -0.637
Solo 1st place:
Rubén (Rank 1): ΔMu = 0.247, ΔSigma = 0.000, ΔSkill = 0.247
Spainer (Rank 2): ΔMu = 0.063, ΔSigma = -0.000, ΔSkill = 0.063
Cris (Rank 3): ΔMu = 0.072, ΔSigma = -0.014, ΔSkill = 0.113
Maca (Rank 4): ΔMu = -0.967, ΔSigma = -0.030, ΔSkill = -0.877
What I have noticed in this new version is that the ranking has changed significantly, which surprises me given that ties are very rare in my system. Now the differences between players are much smaller.
In version 6.0.1, the ranking was:
In version 6.1.0a0, the ranking is:
As you can see, the top-ranked players have lost conservative skill, while mid-ranked players have gained a lot. I'm mentioning this because I'm not sure if this is the expected behavior or if something might have broken. I've also noticed a general increase in Mu values.
On the other hand, the loss of conservative skill when adding more players to a match has not changed, and the issue persists.
What exactly do you need from me to troubleshoot this properly?
Thanks!
-
On the other hand, the loss of conservative skill when adding more players to a match has not changed, and the issue persists.
I'm not sure this is a bug. If we generalize this to, say, 100 players per team, the team dynamics make it much more uncertain which players actually performed well; maybe just 1 or 2 out of the 100 carried everyone else. Because of this, the rating updates become less volatile with larger teams.
However, if this assumption does not hold for your data you can modify beta to get the result you want.
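For example, beta is a constructor argument on the model, so trying a larger value is a one-line change (a sketch, assuming you construct the model yourself; a larger beta treats each individual result as less informative, which also flattens the dependence on team size, as the tables below show):

from openskill.models import PlackettLuce

default_model = PlackettLuce()            # beta = 25/6 (default)
damped_model = PlackettLuce(beta=25 / 3)  # larger beta -> smaller, less team-size-sensitive updates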
Here are some results for 2-team matches with different numbers of players per team:
beta=25/6 (default):
Average Ordinal Change vs Team Size
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Team Size ┃ Winner Average Ordinal Change ┃ Loser Average Ordinal Change ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│         1 │                        3.4377 │                      -1.8331 │
│         2 │                        2.4304 │                      -1.4982 │
│         3 │                        1.9619 │                      -1.3068 │
│         4 │                        1.6815 │                      -1.1770 │
│         5 │                        1.4909 │                      -1.0809 │
│         6 │                        1.3511 │                      -1.0060 │
│         7 │                        1.2432 │                      -0.9453 │
│         8 │                        1.1568 │                      -0.8948 │
│         9 │                        1.0856 │                      -0.8519 │
│        10 │                        1.0258 │                      -0.8149 │
└───────────┴───────────────────────────────┴──────────────────────────────┘
beta=25/3:
Average Ordinal Change vs Team Size
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Team Size ┃ Winner Average Ordinal Change ┃ Loser Average Ordinal Change ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│         1 │                        2.4760 │                      -1.6910 │
│         2 │                        2.0025 │                      -1.3998 │
│         3 │                        1.7124 │                      -1.2341 │
│         4 │                        1.5149 │                      -1.1205 │
│         5 │                        1.3703 │                      -1.0355 │
│         6 │                        1.2590 │                      -0.9683 │
│         7 │                        1.1700 │                      -0.9134 │
│         8 │                        1.0969 │                      -0.8674 │
│         9 │                        1.0355 │                      -0.8280 │
│        10 │                        0.9831 │                      -0.7937 │
└───────────┴───────────────────────────────┴──────────────────────────────┘
beta=25:
Average Ordinal Change vs Team Size
┏━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Team Size ┃ Winner Average Ordinal Change ┃ Loser Average Ordinal Change ┃
┡━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│         1 │                        0.9655 │                      -0.8981 │
│         2 │                        0.9300 │                      -0.8468 │
│         3 │                        0.8954 │                      -0.8058 │
│         4 │                        0.8632 │                      -0.7713 │
│         5 │                        0.8335 │                      -0.7415 │
│         6 │                        0.8062 │                      -0.7154 │
│         7 │                        0.7811 │                      -0.6922 │
│         8 │                        0.7580 │                      -0.6713 │
│         9 │                        0.7367 │                      -0.6523 │
│        10 │                        0.7169 │                      -0.6350 │
└───────────┴───────────────────────────────┴──────────────────────────────┘
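Something along these lines will produce a table like the ones above (a rough sketch; the randomised starting ratings are placeholders and the trial count is arbitrary):

import random
from openskill.models import PlackettLuce

def average_ordinal_change(beta, team_size, trials=1000, seed=0):
    """Average per-player change in ordinal (mu - 3 * sigma) for the winning and losing team."""
    rng = random.Random(seed)
    model = PlackettLuce(beta=beta)
    winner_total = loser_total = 0.0
    for _ in range(trials):
        # Two teams of `team_size` players each, with randomised starting ratings.
        teams = [
            [model.rating(mu=rng.uniform(20, 30), sigma=rng.uniform(6, 9)) for _ in range(team_size)]
            for _ in range(2)
        ]
        before = [sum(p.mu - 3 * p.sigma for p in team) / team_size for team in teams]
        rated = model.rate(teams, ranks=[1, 2])  # the first team listed wins
        after = [sum(p.mu - 3 * p.sigma for p in team) / team_size for team in rated]
        winner_total += after[0] - before[0]
        loser_total += after[1] - before[1]
    return winner_total / trials, loser_total / trials

# Example: the beta=25/6 row for team size 4
print(average_ordinal_change(beta=25 / 6, team_size=4))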
-
Hi!
Thanks for the analysis. Maybe the issue is how I have it configured, as you mentioned.
To give some context, my system is based on individual players. Occasionally, there are teams, but 95% of the matches involve individual players.
So, the question is: what is more likely, winning a match with 20 players or with 5, assuming all players have similar skill levels?
It feels counterintuitive that the predict_win output shows the player's win probability decreasing (indicating that their victory is harder), yet this isn't reflected in a larger skill gain; if anything, the gain shrinks.
On another note, have you checked the ranking differences between the new version and the previous one? I ask because, while fixing the tie issue, something else might have broken.
-
Hi!
I've been testing different beta values, but I’m not convinced by the results. After rereading the conversation, I think there may have been a misunderstanding. As I mentioned, my matches involve individual players, not teams.
Let’s imagine a Battle Royale game like Fortnite. Is it easier to win a match with 20 players or with 100? (Assuming equal skill levels.)
What’s happening is that the system is penalizing the winner in the match with 100 players. Here are some examples:
Match with 8 players (various skill levels):
Cris: Win probability = 8.55%
Spainer: Win probability = 16.61%
Anxo: Win probability = 18.49%
Mixo: Win probability = 6.48%
Spawn: Win probability = 17.30%
Shuku: Win probability = 6.96%
Maca: Win probability = 8.14%
Rubén: Win probability = 17.47%
Skill changes after the match:
Mixo (Rank 1): ΔMu = 0.613, ΔSigma = -0.000, ΔSkill = 0.613
Spainer (Rank 2): ΔMu = 0.134, ΔSigma = 0.002, ΔSkill = 0.128
Cris (Rank 3): ΔMu = 0.305, ΔSigma = -0.001, ΔSkill = 0.308
Anxo (Rank 4): ΔMu = 0.380, ΔSigma = -0.154, ΔSkill = 0.842
Spawn (Rank 5): ΔMu = -0.013, ΔSigma = 0.000, ΔSkill = -0.013
Shuku (Rank 6): ΔMu = 0.031, ΔSigma = -0.001, ΔSkill = 0.034
Maca (Rank 7): ΔMu = -0.212, ΔSigma = -0.012, ΔSkill = -0.175
Rubén (Rank 8): ΔMu = -0.522, ΔSigma = -0.001, ΔSkill = -0.520
Mixo gains 0.613 conservative skill with a 6.48% win probability.
Match with 4 players:
Cris: Win probability = 16.05%
Spainer: Win probability = 34.21%
Anxo: Win probability = 38.46%
Mixo: Win probability = 11.28%
Skill changes after the match:
Mixo (Rank 1): ΔMu = 0.745, ΔSigma = -0.003, ΔSkill = 0.755
Spainer (Rank 2): ΔMu = 0.089, ΔSigma = 0.001, ΔSkill = 0.087
Cris (Rank 3): ΔMu = 0.174, ΔSigma = -0.007, ΔSkill = 0.196
Anxo (Rank 4): ΔMu = -5.071, ΔSigma = -0.369, ΔSkill = -3.964
Mixo gains 0.755 conservative skill with an 11.28% win probability.
Issue:
Why does Mixo gain fewer points in the 8-player match? Winning should be more valuable when there are more opponents, since his win probability is lower, meaning the outcome is more surprising. His victory should be rewarded more, not less.
Any insights into this behavior?
Best regards!
-
Please provide a reproduction script. I can't reproduce this on my end:
4-Player Battle Royale
┏━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┓
┃ Rank ┃ Initial μ ┃ Initial σ ┃ Win Prob ┃ Final μ ┃ Final σ ┃ Δμ ┃ Δσ ┃ ΔSkill ┃
┡━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━┩
│ 1 │ 22.000 │ 14.000 │ 0.179 │ 29.081 │ 13.718 │ +7.081 │ -0.282 │ +7.926 │
│ 2 │ 30.000 │ 8.000 │ 0.299 │ 31.043 │ 7.919 │ +1.043 │ -0.081 │ +1.286 │
│ 3 │ 32.000 │ 6.500 │ 0.339 │ 31.432 │ 6.444 │ -0.568 │ -0.056 │ -0.399 │
│ 4 │ 23.000 │ 11.000 │ 0.183 │ 18.282 │ 10.594 │ -4.718 │ -0.406 │ -3.500 │
└──────┴───────────┴───────────┴──────────┴─────────┴─────────┴────────┴────────┴────────┘
8-Player Battle Royale
┏━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┓
┃ Rank ┃ Initial μ ┃ Initial σ ┃ Win Prob ┃ Final μ ┃ Final σ ┃ Δμ ┃ Δσ ┃ ΔSkill ┃
┡━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━┩
│ 1 │ 21.000 │ 7.000 │ 0.056 │ 22.560 │ 6.996 │ +1.560 │ -0.004 │ +1.570 │
│ 2 │ 38.000 │ 6.500 │ 0.180 │ 38.984 │ 6.490 │ +0.984 │ -0.010 │ +1.014 │
│ 3 │ 36.000 │ 8.500 │ 0.163 │ 37.245 │ 8.454 │ +1.245 │ -0.046 │ +1.384 │
│ 4 │ 33.000 │ 12.000 │ 0.140 │ 34.631 │ 11.753 │ +1.631 │ -0.247 │ +2.373 │
│ 5 │ 22.000 │ 9.000 │ 0.066 │ 22.984 │ 8.922 │ +0.984 │ -0.078 │ +1.218 │
│ 6 │ 22.000 │ 14.000 │ 0.075 │ 22.758 │ 13.383 │ +0.758 │ -0.617 │ +2.611 │
│ 7 │ 33.000 │ 6.500 │ 0.142 │ 31.878 │ 6.454 │ -1.122 │ -0.046 │ -0.983 │
│ 8 │ 38.000 │ 7.500 │ 0.179 │ 33.885 │ 7.409 │ -4.115 │ -0.091 │ -3.843 │
└──────┴───────────┴───────────┴──────────┴─────────┴─────────┴────────┴────────┴────────┘
16-Player Battle Royale
┏━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┳━━━━━━━━┳━━━━━━━━┓
┃ Rank ┃ Initial μ ┃ Initial σ ┃ Win Prob ┃ Final μ ┃ Final σ ┃ Δμ ┃ Δσ ┃ ΔSkill ┃
┡━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━╇━━━━━━━━╇━━━━━━━━┩
│ 1 │ 21.000 │ 9.000 │ 0.030 │ 22.723 │ 8.999 │ +1.723 │ -0.001 │ +1.727 │
│ 2 │ 27.000 │ 14.500 │ 0.050 │ 31.163 │ 14.473 │ +4.163 │ -0.027 │ +4.242 │
│ 3 │ 21.000 │ 14.500 │ 0.035 │ 24.974 │ 14.464 │ +3.974 │ -0.036 │ +4.082 │
│ 4 │ 38.000 │ 11.500 │ 0.080 │ 40.038 │ 11.472 │ +2.038 │ -0.028 │ +2.121 │
│ 5 │ 21.000 │ 14.000 │ 0.035 │ 24.174 │ 13.944 │ +3.174 │ -0.056 │ +3.342 │
│ 6 │ 24.000 │ 10.000 │ 0.038 │ 25.416 │ 9.981 │ +1.416 │ -0.019 │ +1.473 │
│ 7 │ 33.000 │ 8.000 │ 0.065 │ 33.653 │ 7.989 │ +0.653 │ -0.011 │ +0.686 │
│ 8 │ 37.000 │ 7.500 │ 0.079 │ 37.366 │ 7.489 │ +0.366 │ -0.011 │ +0.399 │
│ 9 │ 38.000 │ 10.000 │ 0.081 │ 38.318 │ 9.957 │ +0.318 │ -0.043 │ +0.447 │
│ 10 │ 37.000 │ 8.333 │ 0.079 │ 37.018 │ 8.310 │ +0.018 │ -0.023 │ +0.089 │
│ 11 │ 23.000 │ 14.500 │ 0.040 │ 23.701 │ 14.306 │ +0.701 │ -0.194 │ +1.282 │
│ 12 │ 38.000 │ 8.500 │ 0.082 │ 37.360 │ 8.465 │ -0.640 │ -0.035 │ -0.535 │
│ 13 │ 31.000 │ 7.500 │ 0.058 │ 30.477 │ 7.479 │ -0.523 │ -0.021 │ -0.459 │
│ 14 │ 37.000 │ 7.000 │ 0.079 │ 35.965 │ 6.979 │ -1.035 │ -0.021 │ -0.972 │
│ 15 │ 38.000 │ 6.500 │ 0.083 │ 36.599 │ 6.482 │ -1.401 │ -0.018 │ -1.347 │
│ 16 │ 39.000 │ 8.500 │ 0.085 │ 34.897 │ 8.445 │ -4.103 │ -0.055 │ -3.937 │
└──────┴───────────┴───────────┴──────────┴─────────┴─────────┴────────┴────────┴────────┘
Notice how rank 1 always has the lowest probability of winning. Now compare the rank 1 skill gain in the game of 8 single-player teams with the game of 16 single-player teams.
In the 8-team game: starting ordinal (21 - 3 * 7) = 0, change of +1.570.
In the 16-team game: starting ordinal (21 - 3 * 9) = -6, change of +1.727.
So, as you can see, the ordinal gain is actually larger even though that player's win probability is lower.
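In code terms, for the rank 1 player in each table (using the initial and final values shown above):

# 8-team game: rank 1 started at mu=21, sigma=7
start_8 = 21.000 - 3 * 7.000   # ordinal  0.000
end_8 = 22.560 - 3 * 6.996     # ordinal ~1.572

# 16-team game: rank 1 started at mu=21, sigma=9
start_16 = 21.000 - 3 * 9.000  # ordinal -6.000
end_16 = 22.723 - 3 * 8.999    # ordinal ~-4.274

print(end_8 - start_8, end_16 - start_16)  # ~ +1.57 vs ~ +1.73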
-
Hi,
I've been reviewing everything, and with the way I have it programmed right now, I can't put together a script example at the moment. Also, after reading your response, I think I might be misunderstanding how the Plackett-Luce model works compared to what I actually need...
Looking at your reproduction, it's not the same scenario that I'm presenting. To reproduce it correctly on your end, you would need to use the same players in each example and ensure that Mu and Sigma remain unchanged.
If you run a 4-player match, those same 4 players should retain their values when expanding the test to 8 or 16 players.
In my examples, I keep the same players without changing their final ranks—I simply add new players to the match. So, for accurate simulations, they should be performed with the same players.
Moreover, comparing your 4-player example with the 16-player match, the winner gains significantly more points in the smaller game.
If we scaled this up to 100 players, what would happen? The difference between 1st and 2nd place would become almost negligible. Finishing 1st or 2nd wouldn't matter, which seems counterintuitive.
Regards!
-
Hi!
I'm attaching a test script with current player data from my database.
import math
import copy
from openskill.models.weng_lin.plackett_luce import PlackettLuce, PlackettLuceRating

# Instantiate the model (adjust parameters if desired)
pl_model = PlackettLuce()

# Define the base ratings (mu, sigma) according to your table
base_ratings = {
    "Spawn": (34.981, 1.921),
    "Spainer": (34.155, 1.675),
    "Rubén": (34.57, 1.901),
    "Ines": (37.643, 3.124),
    "Manwa": (34.07, 2.323),
    "Cris": (31.255, 2.461),
    "Elias": (29.951, 2.446),
    "Estrada": (29.313, 2.466),
    "Duna": (30.869, 3.253),
    "Shuku": (27.181, 2.155),
    "Maca": (29.102, 3.035),
}

def simulate_scenario(scenario_name, participants, ranks):
    """
    Creates copies of the base ratings and simulates the given match.
    - participants: list with the names in the order you want to form teams.
    - ranks: list of positions (1 = first, 2 = second, etc.)
    """
    print(f"\n=== Simulation: {scenario_name} ===")

    # 1. Copy the initial ratings so as NOT to affect the global state
    scenario_players = {}
    for name, (mu, sigma) in base_ratings.items():
        scenario_players[name] = pl_model.rating(mu=mu, sigma=sigma, name=name)

    # 2. Create "teams" (1 player per team)
    teams = [[scenario_players[name]] for name in participants]

    # 3. Compute the probability of winning before the match result
    win_probabilities = pl_model.predict_win(teams)
    print("Winning probability (prior to the result):")
    for i, name in enumerate(participants):
        print(f" {name}: {win_probabilities[i]*100:.2f}%")

    # 4. Update ratings based on the match result
    new_ratings = pl_model.rate(teams, ranks=ranks)

    # 5. Show the changes in Mu, Sigma, and Conservative Skill
    print("\nRating changes after the match:")
    for i, name in enumerate(participants):
        old_mu, old_sigma = base_ratings[name]  # Initial rating
        new_mu = new_ratings[i][0].mu
        new_sigma = new_ratings[i][0].sigma
        delta_mu = new_mu - old_mu
        delta_sigma = new_sigma - old_sigma
        old_skill = old_mu - 3 * old_sigma
        new_skill = new_mu - 3 * new_sigma
        delta_skill = new_skill - old_skill
        print(
            f" {name} (Rank {ranks[i]}): "
            f"ΔMu = {delta_mu:.3f}, ΔSigma = {delta_sigma:.3f}, ΔSkill = {delta_skill:.3f}"
        )

# SCENARIO 1: 2-player match
participants_2 = ["Spainer", "Cris"]
ranks_2 = [1, 2]
simulate_scenario("2 players", participants_2, ranks_2)

# SCENARIO 2: 4-player match
participants_4 = ["Spainer", "Cris", "Spawn", "Rubén"]
ranks_4 = [1, 2, 3, 4]
simulate_scenario("4 players", participants_4, ranks_4)

# SCENARIO 3: 6-player match
participants_6 = ["Spainer", "Cris", "Spawn", "Rubén", "Ines", "Manwa"]
ranks_6 = [1, 2, 3, 4, 5, 6]
simulate_scenario("6 players", participants_6, ranks_6)

# SCENARIO 4: 8-player match
participants_8 = ["Spainer", "Cris", "Spawn", "Rubén", "Ines", "Manwa", "Shuku", "Elias"]
ranks_8 = [1, 2, 3, 4, 5, 6, 7, 8]
simulate_scenario("8 players", participants_8, ranks_8)
With the following output:
=== Simulation: 2 players ===
Winning probability (prior to the result):
Spainer: 44.88%
Spawn: 55.12%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.233, ΔSigma = -0.002, ΔSkill = 0.238
Spawn (Rank 2): ΔMu = -0.306, ΔSigma = -0.005, ΔSkill = -0.292
=== Simulation: 4 players ===
Winning probability (prior to the result):
Spainer: 26.55%
Spawn: 29.80%
Cris: 15.46%
Rubén: 28.19%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.225, ΔSigma = 0.001, ΔSkill = 0.222
Spawn (Rank 2): ΔMu = 0.134, ΔSigma = -0.002, ΔSkill = 0.140
Cris (Rank 3): ΔMu = 0.095, ΔSigma = -0.012, ΔSkill = 0.132
Rubén (Rank 4): ΔMu = -0.479, ΔSigma = -0.004, ΔSkill = -0.467
=== Simulation: 6 players ===
Winning probability (prior to the result):
Spainer: 16.03%
Spawn: 17.96%
Cris: 9.68%
Rubén: 17.00%
Ines: 23.49%
Manwa: 15.85%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.203, ΔSigma = 0.002, ΔSkill = 0.198
Spawn (Rank 2): ΔMu = 0.198, ΔSigma = 0.000, ΔSkill = 0.196
Cris (Rank 3): ΔMu = 0.280, ΔSigma = -0.003, ΔSkill = 0.290
Rubén (Rank 4): ΔMu = 0.025, ΔSigma = -0.001, ΔSkill = 0.028
Ines (Rank 5): ΔMu = -0.650, ΔSigma = -0.031, ΔSkill = -0.556
Manwa (Rank 6): ΔMu = -0.606, ΔSigma = -0.007, ΔSkill = -0.585
=== Simulation: 8 players ===
Winning probability (prior to the result):
Spainer: 14.29%
Spawn: 15.54%
Cris: 9.84%
Rubén: 14.92%
Ines: 18.99%
Manwa: 14.12%
Shuku: 4.36%
Elias: 7.95%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.181, ΔSigma = 0.002, ΔSkill = 0.176
Spawn (Rank 2): ΔMu = 0.191, ΔSigma = 0.001, ΔSkill = 0.188
Cris (Rank 3): ΔMu = 0.280, ΔSigma = -0.001, ΔSkill = 0.283
Rubén (Rank 4): ΔMu = 0.079, ΔSigma = 0.000, ΔSkill = 0.078
Ines (Rank 5): ΔMu = -0.179, ΔSigma = -0.017, ΔSkill = -0.128
Manwa (Rank 6): ΔMu = -0.154, ΔSigma = -0.005, ΔSkill = -0.140
Shuku (Rank 7): ΔMu = -0.096, ΔSigma = -0.003, ΔSkill = -0.088
Elias (Rank 8): ΔMu = -0.699, ΔSigma = -0.006, ΔSkill = -0.680
We can observe that in a 2-player match, Spainer, with a 44% chance of winning, gains 0.238 skill points.
In a 4-player match, Spainer's win probability drops to roughly 26%, but he gains fewer points (0.222).
In an 8-player match, Spainer's win probability drops further to 14%, yet he only gains 0.176 points.
Meanwhile, Spawn, who initially had a higher probability of winning in the 2-player match, starts gaining more points than Spainer in the 8-player match.
If I keep adding more players, Spainer (or any player in 1st place) would gain progressively fewer points, making placement in a Battle Royale-type game irrelevant. Finishing 1st or 10th wouldn't matter much.
If this is not a bug, the algorithm favors smaller matches because points are distributed among all participants. In larger matches, there’s no significant advantage for the winner.
I honestly don’t know how to fix this, as I’m not a mathematician.
-
OpenSkill Version: 6.1.0-alpha.0
=== Simulation: 2 players ===
Winning probability (prior to the result):
Spainer: 66.98%
Cris: 33.02%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.167, ΔSigma = -0.001, ΔSkill = 0.171
Cris (Rank 2): ΔMu = -0.360, ΔSigma = -0.014, ΔSkill = -0.318
=== Simulation: 4 players ===
Winning probability (prior to the result):
Spainer: 26.55%
Cris: 15.46%
Spawn: 29.80%
Rubén: 28.19%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.225, ΔSigma = 0.001, ΔSkill = 0.222
Cris (Rank 2): ΔMu = 0.365, ΔSigma = -0.007, ΔSkill = 0.384
Spawn (Rank 3): ΔMu = -0.070, ΔSigma = -0.004, ΔSkill = -0.057
Rubén (Rank 4): ΔMu = -0.440, ΔSigma = -0.004, ΔSkill = -0.428
=== Simulation: 6 players ===
Winning probability (prior to the result):
Spainer: 16.03%
Cris: 9.68%
Spawn: 17.96%
Rubén: 17.00%
Ines: 23.49%
Manwa: 15.85%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.203, ΔSigma = 0.002, ΔSkill = 0.198
Cris (Rank 2): ΔMu = 0.378, ΔSigma = -0.001, ΔSkill = 0.382
Spawn (Rank 3): ΔMu = 0.121, ΔSigma = -0.000, ΔSkill = 0.122
Rubén (Rank 4): ΔMu = 0.030, ΔSigma = -0.001, ΔSkill = 0.033
Ines (Rank 5): ΔMu = -0.632, ΔSigma = -0.031, ΔSkill = -0.539
Manwa (Rank 6): ΔMu = -0.599, ΔSigma = -0.007, ΔSkill = -0.578
=== Simulation: 8 players ===
Winning probability (prior to the result):
Spainer: 14.29%
Cris: 9.84%
Spawn: 15.54%
Rubén: 14.92%
Ines: 18.99%
Manwa: 14.12%
Shuku: 4.36%
Elias: 7.95%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.181, ΔSigma = 0.002, ΔSkill = 0.176
Cris (Rank 2): ΔMu = 0.347, ΔSigma = -0.000, ΔSkill = 0.347
Spawn (Rank 3): ΔMu = 0.140, ΔSigma = 0.001, ΔSkill = 0.138
Rubén (Rank 4): ΔMu = 0.081, ΔSigma = 0.000, ΔSkill = 0.080
Ines (Rank 5): ΔMu = -0.171, ΔSigma = -0.017, ΔSkill = -0.121
Manwa (Rank 6): ΔMu = -0.151, ΔSigma = -0.005, ΔSkill = -0.137
Shuku (Rank 7): ΔMu = -0.094, ΔSigma = -0.003, ΔSkill = -0.087
Elias (Rank 8): ΔMu = -0.696, ΔSigma = -0.006, ΔSkill = -0.677
Same issue again. You most definitely do not have the latest alpha installed. Or if you do, your virtual environment is not picking it up. Add a simple print statement inside your script to print the version.
Here is what I used:
import openskill
from openskill.models import PlackettLuce

print(f"OpenSkill Version: {openskill.__version__}")

# Instantiate the model (adjust parameters if desired)
pl_model = PlackettLuce()

# Define the base ratings (mu, sigma) according to your table
base_ratings = {
    "Spawn": (34.981, 1.921),
    "Spainer": (34.155, 1.675),
    "Rubén": (34.57, 1.901),
    "Ines": (37.643, 3.124),
    "Manwa": (34.07, 2.323),
    "Cris": (31.255, 2.461),
    "Elias": (29.951, 2.446),
    "Estrada": (29.313, 2.466),
    "Duna": (30.869, 3.253),
    "Shuku": (27.181, 2.155),
    "Maca": (29.102, 3.035),
}

def simulate_scenario(scenario_name, participants, ranks):
    """
    Creates copies of the base ratings and simulates the given match.
    - participants: list with the names in the order you want to form teams.
    - ranks: list of positions (1 = first, 2 = second, etc.)
    """
    print(f"\n=== Simulation: {scenario_name} ===")

    # 1. Copy the initial ratings so as NOT to affect the global state
    scenario_players = {}
    for name, (mu, sigma) in base_ratings.items():
        scenario_players[name] = pl_model.rating(mu=mu, sigma=sigma, name=name)

    # 2. Create "teams" (1 player per team)
    teams = [[scenario_players[name]] for name in participants]

    # 3. Compute the probability of winning before the match result
    win_probabilities = pl_model.predict_win(teams)
    print("Winning probability (prior to the result):")
    for i, name in enumerate(participants):
        print(f" {name}: {win_probabilities[i] * 100:.2f}%")

    # 4. Update ratings based on the match result
    new_ratings = pl_model.rate(teams, ranks=ranks)

    # 5. Show the changes in Mu, Sigma, and Conservative Skill
    print("\nRating changes after the match:")
    for i, name in enumerate(participants):
        old_mu, old_sigma = base_ratings[name]  # Initial rating
        new_mu = new_ratings[i][0].mu
        new_sigma = new_ratings[i][0].sigma
        delta_mu = new_mu - old_mu
        delta_sigma = new_sigma - old_sigma
        old_skill = old_mu - 3 * old_sigma
        new_skill = new_mu - 3 * new_sigma
        delta_skill = new_skill - old_skill
        print(
            f" {name} (Rank {ranks[i]}): "
            f"ΔMu = {delta_mu:.3f}, ΔSigma = {delta_sigma:.3f}, ΔSkill = {delta_skill:.3f}"
        )

# SCENARIO 1: 2-player match
participants_2 = ["Spainer", "Cris"]
ranks_2 = [1, 2]
simulate_scenario("2 players", participants_2, ranks_2)

# SCENARIO 2: 4-player match
participants_4 = ["Spainer", "Cris", "Spawn", "Rubén"]
ranks_4 = [1, 2, 3, 4]
simulate_scenario("4 players", participants_4, ranks_4)

# SCENARIO 3: 6-player match
participants_6 = ["Spainer", "Cris", "Spawn", "Rubén", "Ines", "Manwa"]
ranks_6 = [1, 2, 3, 4, 5, 6]
simulate_scenario("6 players", participants_6, ranks_6)

# SCENARIO 4: 8-player match
participants_8 = ["Spainer", "Cris", "Spawn", "Rubén", "Ines", "Manwa", "Shuku", "Elias"]
ranks_8 = [1, 2, 3, 4, 5, 6, 7, 8]
simulate_scenario("8 players", participants_8, ranks_8)
-
Uhm, that can't be right. I'm running the script right now, and it's working correctly.
In the example you provided, Cris is always the second player, whereas in my script, the second player is Spawn.
import math
import copy
from openskill.models.weng_lin.plackett_luce import PlackettLuce, PlackettLuceRating

# Instantiate the model (adjust parameters if desired)
pl_model = PlackettLuce()

# Define the base ratings (mu, sigma) according to your table
base_ratings = {
    "Spawn": (34.981, 1.921),
    "Spainer": (34.155, 1.675),
    "Rubén": (34.57, 1.901),
    "Ines": (37.643, 3.124),
    "Manwa": (34.07, 2.323),
    "Cris": (31.255, 2.461),
    "Elias": (29.951, 2.446),
    "Estrada": (29.313, 2.466),
    "Duna": (30.869, 3.253),
    "Shuku": (27.181, 2.155),
    "Maca": (29.102, 3.035),
}

def simulate_scenario(scenario_name, participants, ranks):
    """
    Creates copies of the base ratings and simulates the given match.
    - participants: list with the names in the order you want to form teams.
    - ranks: list of positions (1 = first, 2 = second, etc.)
    """
    print(f"\n=== Simulation: {scenario_name} ===")

    # 1. Copy the initial ratings so as NOT to affect the global state
    scenario_players = {}
    for name, (mu, sigma) in base_ratings.items():
        scenario_players[name] = pl_model.rating(mu=mu, sigma=sigma, name=name)

    # 2. Create "teams" (1 player per team)
    teams = [[scenario_players[name]] for name in participants]

    # 3. Compute the probability of winning before the match result
    win_probabilities = pl_model.predict_win(teams)
    print("Winning probability (prior to the result):")
    for i, name in enumerate(participants):
        print(f" {name}: {win_probabilities[i] * 100:.2f}%")

    # 4. Update ratings based on the match result
    new_ratings = pl_model.rate(teams, ranks=ranks)

    # 5. Show the changes in Mu, Sigma, and Conservative Skill
    print("\nRating changes after the match:")
    for i, name in enumerate(participants):
        old_mu, old_sigma = base_ratings[name]  # Initial rating
        new_mu = new_ratings[i][0].mu
        new_sigma = new_ratings[i][0].sigma
        delta_mu = new_mu - old_mu
        delta_sigma = new_sigma - old_sigma
        old_skill = old_mu - 3 * old_sigma
        new_skill = new_mu - 3 * new_sigma
        delta_skill = new_skill - old_skill
        print(
            f" {name} (Rank {ranks[i]}): "
            f"ΔMu = {delta_mu:.3f}, ΔSigma = {delta_sigma:.3f}, ΔSkill = {delta_skill:.3f}"
        )

# SCENARIO 1: 2-player match
participants_2 = ["Spainer", "Spawn"]
ranks_2 = [1, 2]
simulate_scenario("2 players", participants_2, ranks_2)

# SCENARIO 2: 4-player match
participants_4 = ["Spainer", "Spawn", "Cris", "Rubén"]
ranks_4 = [1, 2, 3, 4]
simulate_scenario("4 players", participants_4, ranks_4)

# SCENARIO 3: 6-player match
participants_6 = ["Spainer", "Spawn", "Cris", "Rubén", "Ines", "Manwa"]
ranks_6 = [1, 2, 3, 4, 5, 6]
simulate_scenario("6 players", participants_6, ranks_6)

# SCENARIO 4: 8-player match
participants_8 = ["Spainer", "Spawn", "Cris", "Rubén", "Ines", "Manwa", "Shuku", "Elias"]
ranks_8 = [1, 2, 3, 4, 5, 6, 7, 8]
simulate_scenario("8 players", participants_8, ranks_8)
Is it fixed if you use this code above?
I'll attach a screenshot with the installed modules; I have the version you mentioned.
Regards
-
What does print(openskill.__version__) print within the script? Put the print somewhere in your script and run it to see whether the alpha is actually installed or the install is broken.
-
I added it as the last line of my script, and the output is as follows.
import math
import copy
import openskill
from openskill.models.weng_lin.plackett_luce import PlackettLuce, PlackettLuceRating

# Instantiate the model (adjust parameters if desired)
pl_model = PlackettLuce()

# Define the base ratings (mu, sigma) according to your table
base_ratings = {
    "Spawn": (34.981, 1.921),
    "Spainer": (34.155, 1.675),
    "Rubén": (34.57, 1.901),
    "Ines": (37.643, 3.124),
    "Manwa": (34.07, 2.323),
    "Cris": (31.255, 2.461),
    "Elias": (29.951, 2.446),
    "Estrada": (29.313, 2.466),
    "Duna": (30.869, 3.253),
    "Shuku": (27.181, 2.155),
    "Maca": (29.102, 3.035),
}

def simulate_scenario(scenario_name, participants, ranks):
    """
    Creates copies of the base ratings and simulates the given match.
    - participants: list with the names in the order you want to form teams.
    - ranks: list of positions (1 = first, 2 = second, etc.)
    """
    print(f"\n=== Simulation: {scenario_name} ===")

    # 1. Copy the initial ratings so as NOT to affect the global state
    scenario_players = {}
    for name, (mu, sigma) in base_ratings.items():
        scenario_players[name] = pl_model.rating(mu=mu, sigma=sigma, name=name)

    # 2. Create "teams" (1 player per team)
    teams = [[scenario_players[name]] for name in participants]

    # 3. Compute the probability of winning before the match result
    win_probabilities = pl_model.predict_win(teams)
    print("Winning probability (prior to the result):")
    for i, name in enumerate(participants):
        print(f" {name}: {win_probabilities[i] * 100:.2f}%")

    # 4. Update ratings based on the match result
    new_ratings = pl_model.rate(teams, ranks=ranks)

    # 5. Show the changes in Mu, Sigma, and Conservative Skill
    print("\nRating changes after the match:")
    for i, name in enumerate(participants):
        old_mu, old_sigma = base_ratings[name]  # Initial rating
        new_mu = new_ratings[i][0].mu
        new_sigma = new_ratings[i][0].sigma
        delta_mu = new_mu - old_mu
        delta_sigma = new_sigma - old_sigma
        old_skill = old_mu - 3 * old_sigma
        new_skill = new_mu - 3 * new_sigma
        delta_skill = new_skill - old_skill
        print(
            f" {name} (Rank {ranks[i]}): "
            f"ΔMu = {delta_mu:.3f}, ΔSigma = {delta_sigma:.3f}, ΔSkill = {delta_skill:.3f}"
        )

# SCENARIO 1: 2-player match
participants_2 = ["Spainer", "Spawn"]
ranks_2 = [1, 2]
simulate_scenario("2 players", participants_2, ranks_2)

# SCENARIO 2: 4-player match
participants_4 = ["Spainer", "Spawn", "Cris", "Rubén"]
ranks_4 = [1, 2, 3, 4]
simulate_scenario("4 players", participants_4, ranks_4)

# SCENARIO 3: 6-player match
participants_6 = ["Spainer", "Spawn", "Cris", "Rubén", "Ines", "Manwa"]
ranks_6 = [1, 2, 3, 4, 5, 6]
simulate_scenario("6 players", participants_6, ranks_6)

# SCENARIO 4: 8-player match
participants_8 = ["Spainer", "Spawn", "Cris", "Rubén", "Ines", "Manwa", "Shuku", "Elias"]
ranks_8 = [1, 2, 3, 4, 5, 6, 7, 8]
simulate_scenario("8 players", participants_8, ranks_8)

print(openskill.__version__)
=== Simulation: 2 players ===
Winning probability (prior to the result):
Spainer: 44.88%
Spawn: 55.12%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.233, ΔSigma = -0.002, ΔSkill = 0.238
Spawn (Rank 2): ΔMu = -0.306, ΔSigma = -0.005, ΔSkill = -0.292
=== Simulation: 4 players ===
Winning probability (prior to the result):
Spainer: 26.55%
Spawn: 29.80%
Cris: 15.46%
Rubén: 28.19%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.225, ΔSigma = 0.001, ΔSkill = 0.222
Spawn (Rank 2): ΔMu = 0.134, ΔSigma = -0.002, ΔSkill = 0.140
Cris (Rank 3): ΔMu = 0.095, ΔSigma = -0.012, ΔSkill = 0.132
Rubén (Rank 4): ΔMu = -0.479, ΔSigma = -0.004, ΔSkill = -0.467
=== Simulation: 6 players ===
Winning probability (prior to the result):
Spainer: 16.03%
Spawn: 17.96%
Cris: 9.68%
Rubén: 17.00%
Ines: 23.49%
Manwa: 15.85%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.203, ΔSigma = 0.002, ΔSkill = 0.198
Spawn (Rank 2): ΔMu = 0.198, ΔSigma = 0.000, ΔSkill = 0.196
Cris (Rank 3): ΔMu = 0.280, ΔSigma = -0.003, ΔSkill = 0.290
Rubén (Rank 4): ΔMu = 0.025, ΔSigma = -0.001, ΔSkill = 0.028
Ines (Rank 5): ΔMu = -0.650, ΔSigma = -0.031, ΔSkill = -0.556
Manwa (Rank 6): ΔMu = -0.606, ΔSigma = -0.007, ΔSkill = -0.585
=== Simulation: 8 players ===
Winning probability (prior to the result):
Spainer: 14.29%
Spawn: 15.54%
Cris: 9.84%
Rubén: 14.92%
Ines: 18.99%
Manwa: 14.12%
Shuku: 4.36%
Elias: 7.95%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.181, ΔSigma = 0.002, ΔSkill = 0.176
Spawn (Rank 2): ΔMu = 0.191, ΔSigma = 0.001, ΔSkill = 0.188
Cris (Rank 3): ΔMu = 0.280, ΔSigma = -0.001, ΔSkill = 0.283
Rubén (Rank 4): ΔMu = 0.079, ΔSigma = 0.000, ΔSkill = 0.078
Ines (Rank 5): ΔMu = -0.179, ΔSigma = -0.017, ΔSkill = -0.128
Manwa (Rank 6): ΔMu = -0.154, ΔSigma = -0.005, ΔSkill = -0.140
Shuku (Rank 7): ΔMu = -0.096, ΔSigma = -0.003, ΔSkill = -0.088
Elias (Rank 8): ΔMu = -0.699, ΔSigma = -0.006, ΔSkill = -0.680
6.1.0-alpha.0
Process finished with exit code 0
-
Hi,
I've added this to the compute method:
num_players = sum(len(team) for team in original_teams)
scale_factor = num_players / 2.0  # this factor can be adjusted, e.g. math.sqrt(num_players)
omega *= scale_factor
delta *= scale_factor
This way, omega and delta scale depending on the number of players.
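For what it's worth, a roughly equivalent effect without patching the library internals would be to scale the per-player deltas after calling rate(). Something like this sketch (scale_factor is the same ad-hoc factor as above; this is not exactly the same as scaling omega and delta inside compute, just close in spirit):

def rate_scaled(model, teams, ranks):
    """Apply the model's update, then scale each player's mu/sigma change by match size."""
    num_players = sum(len(team) for team in teams)
    scale_factor = num_players / 2.0  # same ad-hoc factor as in the patch above
    # Remember the pre-match values, since we only want to scale the change.
    before = [[(p.mu, p.sigma, p.name) for p in team] for team in teams]
    rated = model.rate(teams, ranks=ranks)
    scaled = []
    for old_team, new_team in zip(before, rated):
        scaled.append([
            model.rating(
                mu=old_mu + (new.mu - old_mu) * scale_factor,
                sigma=old_sigma + (new.sigma - old_sigma) * scale_factor,
                name=name,
            )
            for (old_mu, old_sigma, name), new in zip(old_team, new_team)
        ])
    return scaled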
I don’t think this is the best solution, as I’m not a mathematician.
However, Plackett-Luce does not work well with free-for-all matches. TrueSkill does, but it struggles with ties.
Here is the new output with those changes applied.
=== Simulation: 2 players ===
Winning probability (prior to the result):
Spainer: 44.10%
Spawn: 55.90%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.194, ΔSigma = -0.000, ΔSkill = 0.195
Spawn (Rank 2): ΔMu = -0.217, ΔSigma = -0.001, ΔSkill = -0.213
=== Simulation: 4 players ===
Winning probability (prior to the result):
Spainer: 27.03%
Spawn: 30.81%
Cris: 17.04%
Rubén: 25.12%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.370, ΔSigma = 0.001, ΔSkill = 0.367
Spawn (Rank 2): ΔMu = 0.177, ΔSigma = -0.002, ΔSkill = 0.182
Cris (Rank 3): ΔMu = 0.107, ΔSigma = -0.022, ΔSkill = 0.175
Rubén (Rank 4): ΔMu = -0.609, ΔSigma = -0.003, ΔSkill = -0.600
=== Simulation: 6 players ===
Winning probability (prior to the result):
Spainer: 16.43%
Spawn: 18.68%
Cris: 10.66%
Rubén: 15.31%
Ines: 23.40%
Manwa: 15.52%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.507, ΔSigma = 0.002, ΔSkill = 0.502
Spawn (Rank 2): ΔMu = 0.414, ΔSigma = 0.000, ΔSkill = 0.413
Cris (Rank 3): ΔMu = 0.747, ΔSigma = -0.011, ΔSkill = 0.780
Rubén (Rank 4): ΔMu = 0.079, ΔSigma = -0.002, ΔSkill = 0.085
Ines (Rank 5): ΔMu = -1.244, ΔSigma = -0.039, ΔSkill = -1.126
Manwa (Rank 6): ΔMu = -1.178, ΔSigma = -0.009, ΔSkill = -1.150
=== Simulation: 8 players ===
Winning probability (prior to the result):
Spainer: 14.66%
Spawn: 16.11%
Cris: 10.68%
Rubén: 13.92%
Ines: 19.02%
Manwa: 14.04%
Shuku: 3.90%
Elias: 7.66%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.606, ΔSigma = 0.002, ΔSkill = 0.601
Spawn (Rank 2): ΔMu = 0.537, ΔSigma = 0.001, ΔSkill = 0.535
Cris (Rank 3): ΔMu = 1.011, ΔSigma = -0.008, ΔSkill = 1.035
Rubén (Rank 4): ΔMu = 0.238, ΔSigma = -0.001, ΔSkill = 0.241
Ines (Rank 5): ΔMu = -0.473, ΔSigma = -0.029, ΔSkill = -0.386
Manwa (Rank 6): ΔMu = -0.412, ΔSigma = -0.009, ΔSkill = -0.386
Shuku (Rank 7): ΔMu = -0.235, ΔSigma = -0.006, ΔSkill = -0.218
Elias (Rank 8): ΔMu = -1.768, ΔSigma = -0.011, ΔSkill = -1.734
=== Simulation: 10 players ===
Winning probability (prior to the result):
Spainer: 12.62%
Spawn: 13.70%
Cris: 9.54%
Rubén: 12.07%
Ines: 15.79%
Manwa: 12.15%
Shuku: 4.04%
Elias: 7.16%
Estrada: 6.91%
Maca: 6.02%
Rating changes after the match:
Spainer (Rank 1): ΔMu = 0.691, ΔSigma = 0.002, ΔSkill = 0.685
Spawn (Rank 2): ΔMu = 0.647, ΔSigma = 0.001, ΔSkill = 0.644
Cris (Rank 3): ΔMu = 1.252, ΔSigma = -0.005, ΔSkill = 1.268
Rubén (Rank 4): ΔMu = 0.381, ΔSigma = 0.000, ΔSkill = 0.380
Ines (Rank 5): ΔMu = 0.123, ΔSigma = -0.021, ΔSkill = 0.185
Manwa (Rank 6): ΔMu = -0.004, ΔSigma = -0.006, ΔSkill = 0.013
Shuku (Rank 7): ΔMu = 0.169, ΔSigma = -0.003, ΔSkill = 0.179
Elias (Rank 8): ΔMu = -0.459, ΔSigma = -0.010, ΔSkill = -0.430
Estrada (Rank 9): ΔMu = -1.173, ΔSigma = -0.015, ΔSkill = -1.128
Maca (Rank 10): ΔMu = -3.398, ΔSigma = -0.031, ΔSkill = -3.304
6.1.0-alpha.0
Process finished with exit code 0
-
Hello again!
One thing I didn't take into account when modifying the compute method is that, while the first-place winner now gains more points when there are more players, the last-place player also loses significantly more points in larger matches. This also seems counterintuitive: with more players, winning becomes harder and a poor finish is more expected, so the penalty for losing should arguably be smaller.
I'm not a mathematician by any means, and I'm doing everything through trial and error, tweaking things here and there. I don't know if I'm doing something wrong or...
Vivekjoshy, have you implemented this system in free-for-all, battle royale, or similar formats? Because it seems a bit incompatible with these, especially when the number of players varies (e.g., 3-player vs. 6-player matches) and when there are different skill levels.
Best regards.