### Power Rankings: Looking at a Very Simple Method

One of the simplest and most common power ranking models is known as the Bradley-Terry-Luce (BTL) model, which is equivalent to other famous models such as the logistic model and the Elo rating system. I'll be referring to "teams" here, but of course the same ideas apply to any two-participant game.

Let me clarify what I mean when I use the term "power ranking". A power ranking supplies not only a ranking of teams, but also numbers that can be used to estimate the probabilities of the various outcomes were two particular teams to play a match.

In the BTL power ranking system we assume each team $$i$$ has some latent (hidden/unknown) "strength" $$R_i$$, and that the probability of team $$i$$ beating team $$j$$ is $$\frac{R_i}{R_i+R_j}$$. Note that each $$R_i$$ is assumed to be strictly positive. Where does this model structure come from?

Here are three reasonable constraints for a power ranking model:
1. If teams $$i$$ and $$j$$ have equal strength ($$R_i = R_j$$), the probability of one beating the other should be $$\frac{1}{2}$$.
2. As the strength of one team strictly decreases to 0 (infinitely weak) with the other team's strength held fixed, the probability of the other team winning should strictly increase to 1.
3. As the strength of one team strictly increases to infinity (infinitely strong) with the other team's strength held fixed, the probability of the other team winning should strictly decrease to 0.

Note that our model structure satisfies all three constraints. Can you think of other simple model structures that satisfy all three constraints?
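To make the model structure concrete, here's a tiny R sketch (the strengths are made up) showing the ratio form of the win probability and the equivalent logistic form you get by working with $$r_i = \log R_i$$, which is also the reparameterization used below:

```r
# Hypothetical strengths for two teams; any strictly positive values work
R_i <- 2.0
R_j <- 0.5

# Ratio (Bradley-Terry-Luce) form
p_ratio <- R_i / (R_i + R_j)

# Equivalent logistic form, with r = log(R)
r_i <- log(R_i)
r_j <- log(R_j)
p_logistic <- 1 / (1 + exp(-(r_i - r_j)))

p_ratio     # 0.8
p_logistic  # 0.8 as well; the two forms agree
```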

Given this model and a set of teams and match results, how can we estimate the $$R_i$$? The maximum-likelihood estimates are the set of $$R_i$$ that maximizes the probability of the observed outcomes actually happening. For any given match the probability of team $$i$$ beating team $$j$$ is $$\frac{R_i}{R_i+R_j}$$, so the overall probability of the observed outcomes of the matches $$M$$ occurring is $\mathcal{L} = \prod_{m\in M} \frac{R_{w(m)}}{R_{w(m)}+R_{l(m)}},$ where $$w(m)$$ is the winner and $$l(m)$$ the loser of match $$m$$.

We can transform this product into a sum by taking logarithms: $\log\left( \mathcal{L} \right) = \sum_{m\in M} \left[ \log\left(R_{w(m)}\right) - \log\left(R_{w(m)}+R_{l(m)}\right) \right].$ Before going further, let's make a useful reparameterization by setting $$e^{r_i} = R_i$$; this makes sense as we're requiring the $$R_i$$ to be strictly positive. We then get $\log\left( \mathcal{L} \right) = \sum_{m\in M} \left[ r_{w(m)} - \log\left(e^{r_{w(m)}}+e^{r_{l(m)}}\right) \right].$ Taking the partial derivative with respect to $$r_i$$ and setting it equal to zero, we get \begin{eqnarray*}
\frac{\partial \log\left( \mathcal{L} \right)}{\partial r_i} &=& \sum_{w(m)=i} 1 - \frac{e^{r_{w(m)}}}{e^{r_{w(m)}}+e^{r_{l(m)}}} + \sum_{l(m)=i} - \frac{e^{r_{l(m)}}}{e^{r_{w(m)}}+e^{r_{l(m)}}}\\
&=& \sum_{w(m)=i} 1 - \frac{e^{r_i}}{e^{r_i}+e^{r_{l(m)}}} + \sum_{l(m)=i} - \frac{e^{r_i}}{e^{r_{w(m)}}+e^{r_i}}\\
&=&0.
\end{eqnarray*} But this is just the number of actual wins minus the expected wins! Thus, the maximum likelihood estimators for the $$r_i$$ satisfy $$O_i = E_i$$ for all teams $$i$$, where $$O_i$$ is the actual (observed) number of wins for team $$i$$, and $$E_i$$ is the expected number of wins for team $$i$$ based on our model. That's a nice property!

If you'd like to experiment with some actual data, and to see that the resulting fit does indeed satisfy this property, here's an example BTL model using NCAA men's ice hockey scores. You can, of course, actually use this property to iteratively solve for the maximum-likelihood estimates $$R_i$$; one such scheme is sketched below. Note that you'll have to fix one of the $$R_i$$ to be a particular value (or add some other constraint), as the model probabilities are invariant with respect to multiplication of all the $$R_i$$ by the same positive scalar.
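Here's a minimal R sketch of such an iterative scheme (the match results are made up, and I'm using the standard Bradley-Terry fixed-point update, sometimes described as an MM algorithm, rather than anything fancy). It fixes the scale by normalizing the strengths to have geometric mean 1, and then checks the observed-equals-expected-wins property at convergence:

```r
# Toy match results: one row per game (made-up data)
matches <- data.frame(
  winner = c("A", "A", "B", "C", "A", "B"),
  loser  = c("B", "C", "C", "B", "B", "A"),
  stringsAsFactors = FALSE
)
teams <- sort(unique(c(matches$winner, matches$loser)))
R <- setNames(rep(1, length(teams)), teams)  # start all strengths at 1

# Fixed-point update: R_i <- wins_i / sum over i's matches of 1 / (R_i + R_opponent)
for (iter in 1:200) {
  R_new <- R
  for (i in teams) {
    wins <- sum(matches$winner == i)
    opp  <- c(matches$loser[matches$winner == i], matches$winner[matches$loser == i])
    R_new[i] <- wins / sum(1 / (R[i] + R[opp]))
  }
  R <- R_new / prod(R_new)^(1 / length(R_new))  # fix the scale (one choice of constraint)
}

# Check the MLE property: observed wins should equal expected wins for every team
observed <- sapply(teams, function(i) sum(matches$winner == i))
expected <- sapply(teams, function(i) {
  opp <- c(matches$loser[matches$winner == i], matches$winner[matches$loser == i])
  sum(R[i] / (R[i] + R[opp]))
})
round(cbind(strength = R, observed, expected), 3)
```

(A caution: the maximum-likelihood estimates only exist if the results are sufficiently mixed; a team with zero wins or zero losses, for example, sends its strength off to 0 or infinity.)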

### A Bayes' Solution to Monty Hall

For any problem involving conditional probabilities one of your greatest allies is Bayes' Theorem. For two events A and B, Bayes' Theorem relates the probability of A given B to the reverse conditional probability, the probability of B given A.

Standard notation:

- the probability of A given B is written $$\Pr(A \mid B)$$
- the probability of B is written $$\Pr(B)$$

Bayes' Theorem:

Using the notation above, Bayes' Theorem can be written: $\Pr(A \mid B) = \frac{\Pr(B \mid A)\times \Pr(A)}{\Pr(B)}$

Let's apply Bayes' Theorem to the Monty Hall problem. If you recall, we're told that behind three doors there are two goats and one car, all randomly placed. We initially choose a door, and then Monty, who knows what's behind the doors, always shows us a goat behind one of the remaining doors. He can always do this as there are two goats; if we chose the car initially, Monty picks one of the two doors with a goat behind it at random.

Assume we pick Door 1 and then Monty sho…
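The rest of this derivation is cut off here, but whatever you compute with Bayes' Theorem is easy to sanity-check by simulation. Here's a minimal R sketch (the always-pick-Door-1 convention matches the setup above, and the sample size is arbitrary) estimating how often staying versus switching wins the car:

```r
set.seed(1)
n <- 100000  # number of simulated games

sim_one <- function() {
  car  <- sample(1:3, 1)  # car placed uniformly at random
  pick <- 1               # we always pick Door 1, as in the setup above
  # Monty opens a goat door among the two we didn't pick; if we picked
  # the car, he chooses between the two goat doors at random
  goats <- setdiff(2:3, car)
  open  <- if (length(goats) == 1) goats else sample(goats, 1)
  c(stay = (car == pick), switch = (car == setdiff(2:3, open)))
}

results <- replicate(n, sim_one())
rowMeans(results)  # roughly 1/3 for staying, 2/3 for switching
```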

### Mixed Models in R - Bigger, Faster, Stronger

When you start doing more advanced sports analytics you'll eventually start working with what are known as hierarchical, nested, or mixed-effects models. These are models that contain both fixed and random effects. There are multiple ways of distinguishing fixed from random effects, but one way I find particularly useful is that random effects are "predicted" rather than "estimated", and this prediction involves some "shrinkage" towards the mean.

Here's some R code for NCAA ice hockey power rankings using a nested Poisson model (which can be found in my hockey GitHub repository):
```r
library(lme4)  # glmer() comes from the lme4 package

model <- gs ~ year + field + d_div + o_div + game_length +
  (1 | offense) + (1 | defense) + (1 | game_id)
fit <- glmer(model, data = g, verbose = TRUE, family = poisson(link = log))
```

The fixed effects are year, field (home/away/neutral), d_div (NCAA division of the defense), o_div (NCAA division of the offense) and game_length (number of overtime periods); off…
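The description is cut off above, but once `fit` exists, one way to turn the model into an actual power ranking is to pull out the per-team random effects with `ranef()`. This is just a sketch of one reasonable approach (it assumes the intercept-only random effects from the formula above), not necessarily what the repository code does:

```r
# lme4 is already loaded above; ranef() returns the predicted (shrunken)
# random effects, one data frame per grouping factor
re      <- ranef(fit)
offense <- re$offense[, "(Intercept)"]  # higher = scores more (log scale)
defense <- re$defense[, "(Intercept)"]  # higher = concedes more (log scale)

rankings <- data.frame(
  team    = rownames(re$offense),
  offense = offense,
  defense = defense[match(rownames(re$offense), rownames(re$defense))],
  stringsAsFactors = FALSE
)
# One simple overall rating: offense minus defense on the log-goals scale
rankings$overall <- rankings$offense - rankings$defense
head(rankings[order(-rankings$overall), ], 10)  # top 10 teams
```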

### Gambling to Optimize Expected Median Bankroll

Gambling to maximize your expected bankroll mean is extremely risky, as the mean is maximized by wagering your entire bankroll on any favorable gamble, making eventual ruin almost inevitable (the short calculation below makes this concrete). But what if, instead, we gambled not to maximize the expected bankroll mean, but the expected bankroll median?

Let the probability of winning a favorable bet be $$p$$, and the net odds be $$b$$. That is, if we wager $$1$$ unit and win, we get back $$b$$ units (in addition to our wager). Assume our betting strategy is to wager some fraction $$f$$ of our bankroll, hence $$0 \leq f \leq 1$$. By our assumption, our betting strategy is invariant with respect to the actual size of our bankroll, and so if we were to repeat this gamble $$n$$ times with the same $$p$$ and $$b$$, the strategy wouldn't change. It follows we may assume an initial bankroll of size $$1$$.
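As a quick aside, this setup makes the danger of mean-maximization mentioned above concrete. Wagering a fraction $$f$$ of a unit bankroll gives an expected bankroll after one bet of \[ p(1+fb) + (1-p)(1-f) = 1 + f\left(pb - (1-p)\right), \] which for a favorable bet ($$pb > 1-p$$) is strictly increasing in $$f$$. The mean is therefore maximized by always betting everything ($$f = 1$$), but then the probability of surviving $$n$$ such bets is only $$p^n$$, which shrinks to 0 as $$n$$ grows.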

Let $$q = 1-p$$. Now, after $$n$$ such gambles the number of wins $$k$$ has a binomial distribution with probability mass function \[ \Pr(k; n, p) = \binom{n}{k} p^k q^{n-k}, \] …
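Even though the rest of the derivation is cut off above, the setup already pins down the key quantity: starting from a bankroll of 1 and wagering the fraction $$f$$ every time, the bankroll after $$k$$ wins in $$n$$ bets is $$(1+fb)^k (1-f)^{n-k}$$, which is increasing in $$k$$, so the median bankroll is just this expression evaluated at the median number of wins. Here's a minimal R sketch (the values of $$p$$, $$b$$, and $$n$$ are made up) that searches numerically for the fraction maximizing the median bankroll and compares it to the well-known Kelly fraction $$p - q/b$$:

```r
# Made-up example: 55% chance to win at even net odds
p <- 0.55; b <- 1; q <- 1 - p
n <- 1000  # number of repeated gambles

# Median bankroll after n bets when wagering a fraction f each time:
# the bankroll (1 + f*b)^k * (1 - f)^(n - k) is increasing in k, so we
# can plug in the median number of wins
median_bankroll <- function(f) {
  k_med <- qbinom(0.5, n, p)
  (1 + f * b)^k_med * (1 - f)^(n - k_med)
}

f_grid <- seq(0, 0.999, by = 0.001)
f_best <- f_grid[which.max(sapply(f_grid, median_bankroll))]
f_best     # fraction that maximizes the median bankroll (about 0.10 here)
p - q / b  # the Kelly fraction, for comparison (0.10 here as well)
```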