Python implementation of Weng-Lin Bayesian ranking, a better, license-free alternative to TrueSkill
This is a port of the amazing openskill.js package.
pip install openskill
>>> from openskill import Rating, rate
>>> a1 = Rating()
>>> a1
Rating(mu=25, sigma=8.333333333333334)
>>> a2 = Rating(mu=32.444, sigma=5.123)
>>> a2
Rating(mu=32.444, sigma=5.123)
>>> b1 = Rating(43.381, 2.421)
>>> b1
Rating(mu=43.381, sigma=2.421)
>>> b2 = Rating(mu=25.188, sigma=6.211)
>>> b2
Rating(mu=25.188, sigma=6.211)
If a1 and a2 are on a team and win against a team of b1 and b2, send these teams into rate:
>>> [[x1, x2], [y1, y2]] = rate([[a1, a2], [b1, b2]])
>>> x1, x2, y1, y2
(Rating(mu=28.669648436582808, sigma=8.071520788025197), Rating(mu=33.83086971107981, sigma=5.062772998705765), Rating(mu=43.071274808241974, sigma=2.4166900452721256), Rating(mu=23.149503312339064, sigma=6.1378606973362135))
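As expected, the winning team's mu values rise and the losing team's fall, which you can verify directly from the objects above:

>>> x1.mu > a1.mu and y1.mu < b1.mu
True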
You can also create Rating objects by importing create_rating:
>>> from openskill import create_rating
>>> x1 = [28.669648436582808, 8.071520788025197]
>>> x1 = create_rating(x1)
>>> x1
Rating(mu=28.669648436582808, sigma=8.071520788025197)
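This is useful when ratings are persisted outside the library. For example, assuming a hypothetical storage dict holding plain numbers, a Rating can be rebuilt like so:

>>> stored = {'mu': 28.669648436582808, 'sigma': 8.071520788025197}  # e.g. loaded from a database row
>>> create_rating([stored['mu'], stored['sigma']])
Rating(mu=28.669648436582808, sigma=8.071520788025197)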
When displaying a rating, or sorting a list of ratings, you can use ordinal:
>>> from openskill import ordinal
>>> ordinal([43.07, 2.42])
35.81
By default, this returns mu - 3 * sigma: a conservative estimate for which there is a 99.7% likelihood that the player's true rating is higher. Because of this, in early games a player's ordinal rating will usually go up, and it can go up even when the player loses.
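As a quick sanity check (plain Python, not part of the library), the same value can be computed by hand:

>>> mu, sigma = 43.07, 2.42
>>> round(mu - 3 * sigma, 2)
35.81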
If your teams are listed in one order but your ranking is in a different order, for convenience you can specify a rank option, such as:
>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2])
>>> result
[[Rating(mu=20.96265504062538, sigma=8.083731307186588)], [Rating(mu=27.795084971874736, sigma=8.263160757613477)], [Rating(mu=24.68943500312503, sigma=8.083731307186588)], [Rating(mu=26.552824984374855, sigma=8.179213704945203)]]
Lower ranks are assumed to be better (wins) and higher ranks worse (losses). You can provide a score instead, where higher is better and lower is worse; these can simply be raw scores from the game, if you want (a small conversion sketch follows the example below).
Ties should have either equivalent rank or score.
>>> a1 = b1 = c1 = d1 = Rating()
>>> result = [[a2], [b2], [c2], [d2]] = rate([[a1], [b1], [c1], [d1]], score=[37, 19, 37, 42])
>>> result
[[Rating(mu=24.68943500312503, sigma=8.179213704945203)], [Rating(mu=22.826045021875203, sigma=8.179213704945203)], [Rating(mu=24.68943500312503, sigma=8.179213704945203)], [Rating(mu=27.795084971874736, sigma=8.263160757613477)]]
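If your game produces raw scores but you prefer to pass rank, a plain-Python conversion like this (a sketch, not part of the library) gives tied scores the same rank:

>>> scores = [37, 19, 37, 42]
>>> ranks = [sorted(scores, reverse=True).index(s) + 1 for s in scores]
>>> ranks
[2, 4, 2, 1]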
You can compare two or more teams to get the probabilities of each team winning.
>>> from openskill import predict_win
>>> a1 = Rating()
>>> a2 = Rating(mu=33.564, sigma=1.123)
>>> predictions = predict_win(teams=[[a1], [a2]])
>>> predictions
[0.45110901512761536, 0.5488909848723846]
>>> sum(predictions)
1.0
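predict_win also accepts more than two teams and returns one probability per team; a minimal sketch (the third player's mu and sigma here are made up for illustration):

>>> c1 = Rating(mu=18.0, sigma=6.0)  # hypothetical third player
>>> predictions = predict_win(teams=[[a1], [a2], [c1]])
>>> len(predictions)
3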
You can compare two or more teams to get the probability of the match ending in a draw.
>>> from openskill import predict_draw
>>> a1 = Rating()
>>> a2 = Rating(mu=33.564, sigma=1.123)
>>> prediction = predict_draw(teams=[[a1], [a2]])
>>> prediction
0.09025541153402594
The default model is PlackettLuce. You can import alternate models from openskill.models like so:
>>> from openskill.models import BradleyTerryFull
>>> a1 = b1 = c1 = d1 = Rating()
>>> rate([[a1], [b1], [c1], [d1]], rank=[4, 1, 3, 2], model=BradleyTerryFull)
[[Rating(mu=17.09430584957905, sigma=7.5012190693964005)], [Rating(mu=32.90569415042095, sigma=7.5012190693964005)], [Rating(mu=22.36476861652635, sigma=7.5012190693964005)], [Rating(mu=27.63523138347365, sigma=7.5012190693964005)]]
* BradleyTerryFull: Full Pairing for Bradley-Terry
* BradleyTerryPart: Partial Pairing for Bradley-Terry
* PlackettLuce: Generalized Bradley-Terry
* ThurstoneMostellerFull: Full Pairing for Thurstone-Mosteller
* ThurstoneMostellerPart: Partial Pairing for Thurstone-Mosteller

You can learn more about how to configure this library to suit your custom needs in the project documentation.
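Any of these can be passed through the same model option. A minimal sketch with ThurstoneMostellerFull (exact output values omitted; only the direction of the update is asserted):

>>> from openskill.models import ThurstoneMostellerFull
>>> [[w], [l]] = rate([[Rating()], [Rating()]], model=ThurstoneMostellerFull)
>>> w.mu > l.mu
True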