

Lunchtime Sports Science: Introducing tanh5

As I mentioned in a previous article on ratings systems, the log5 estimate of the probability \( p \) that participant 1 beats participant 2, given respective success probabilities \( p_1, p_2 \), is
\begin{align}
p &= \frac{p_1 q_2}{p_1 q_2+q_1 p_2}\\
&= \frac{p_1/q_1}{p_1/q_1+p_2/q_2}\\
\frac{p}{q} &= \frac{p_1}{q_1} \cdot \frac{q_2}{p_2}
\end{align} where \( q_1=1-p_1, q_2=1-p_2, q=1-p \).
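Here's a minimal Python sketch of this estimate (the function name log5 and the example numbers are mine, for illustration):

    def log5(p1: float, p2: float) -> float:
        """Log5 estimate of the probability that participant 1 beats
        participant 2, given each one's success rate against the pool."""
        q1, q2 = 1.0 - p1, 1.0 - p2
        return (p1 * q2) / (p1 * q2 + q1 * p2)

    print(log5(0.6, 0.5))  # 0.6 -- a 0.500 opponent is average, so p1 passes through
    print(log5(0.6, 0.6))  # 0.5 -- equally rated participants are a coin flip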

Where does this come from? Assume that each participant has played average opposition. In a Bradley-Terry setting, this means
\begin{align}
p_1 &= \frac{R_1}{R_1 + 1}\\
p_2 &= \frac{R_2}{R_2 + 1},
\end{align} where \( R_1 \) and \( R_2 \) are the (latent) Bradley-Terry ratings; the \( 1 \) in the denominators is an estimate for the average rating of the participants they've played en route to achieving their respective success probabilities.

In a Bradley-Terry setting, the ratings of the entire pool are conventionally normalized so that their product equals 1. But participants don't play themselves! Thus, if participant 1 played every participant but itself, the average opponent (in the geometric-mean sense) would have rating \( R \), where \( R_1 \cdot R^{n-1} = 1 \). Here \( n \) is the number of participants in the pool.

Our strength estimate for the average opponent faced is then
\begin{align}
R &= {R_1}^{-\frac{1}{n-1}}.
\end{align}
There are two extreme cases. If \( n=2 \), then \( R = \frac{1}{R_1} \); as \( n \to +\infty \), \( R \to 1 \).
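A quick numeric check of this (the helper name is hypothetical):

    def avg_opponent_rating(r1: float, n: int) -> float:
        """Geometric-mean rating of participant 1's n - 1 opponents,
        given that the ratings of the whole pool multiply to 1."""
        return r1 ** (-1.0 / (n - 1))

    print(avg_opponent_rating(4.0, 2))     # 0.25, i.e. 1/R1
    print(avg_opponent_rating(4.0, 1000))  # ~0.9986, approaching 1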

The limiting case \( n \to \infty \) is log5; the \( n=2 \) case I call tanh5. For tanh5 we compute
\begin{align}
p_1 &= \frac{R_1}{R_1 + 1/R_1} = \frac{{R_1}^2}{{R_1}^2+1}\\
p_2 &= \frac{R_2}{R_2 + 1/R_2} = \frac{{R_2}^2}{{R_2}^2+1},
\end{align}
so \( R_1 = \sqrt{p_1/q_1} \) and \( R_2 = \sqrt{p_2/q_2} \). Substituting into \( p = \frac{R_1}{R_1+R_2} \) gives
\begin{align}
\textrm{tanh5} = p &= \frac{ \sqrt{p_1 q_2} }{ \sqrt{p_1 q_2} + \sqrt{q_1 p_2} }.
\end{align}
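And a matching Python sketch (again, the name tanh5 here is just for illustration):

    from math import sqrt

    def tanh5(p1: float, p2: float) -> float:
        """tanh5 estimate: the n = 2 case, where each participant's record
        came against an opponent rated 1/R rather than 1."""
        q1, q2 = 1.0 - p1, 1.0 - p2
        x, y = sqrt(p1 * q2), sqrt(q1 * p2)
        return x / (x + y)

    print(tanh5(0.6, 0.5))  # ~0.5505, more conservative than log5's 0.6
    print(tanh5(0.6, 0.6))  # 0.5

The shrinkage toward 1/2 reflects that the tanh5 ratings \( \sqrt{p_i/q_i} \) are the square roots of the log5 ratings \( p_i/q_i \).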
Why tanh5? We can think of log5 as derived from the logistic function by setting \( \log(R)=0 \) for the opponent's rating; analogously, tanh5 arises from the hyperbolic tangent function in the same way.
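To spell the naming out (this identity follows directly from the formulas above, writing \( r_1 = \log R_1 \)):
\begin{align}
\textrm{log5:} \quad p_1 &= \frac{R_1}{R_1+1} = \frac{1}{1+e^{-r_1}},\\
\textrm{tanh5:} \quad p_1 &= \frac{{R_1}^2}{{R_1}^2+1} = \frac{1}{1+e^{-2r_1}} = \frac{1+\tanh(r_1)}{2}.
\end{align}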

Note that we get a spectrum of estimates, one for each value of \( n \); log5 and tanh5 are the two extremes. This also suggests a new spectrum of activation functions for neural networks, but I'll explore that application later.
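For what it's worth, the intermediate members of the spectrum can be written down the same way: from \( p_1 = R_1/\left(R_1 + {R_1}^{-1/(n-1)}\right) \) we get \( R_1 = (p_1/q_1)^{(n-1)/n} \), so the head-to-head estimate raises the log5 odds products to the power \( \alpha = (n-1)/n \), running from \( 1/2 \) (tanh5) at \( n=2 \) to \( 1 \) (log5) in the limit. A Python sketch of this general case (the formula and the name pool5 are my own working-out of the spectrum mentioned above):

    def pool5(p1: float, p2: float, n: int) -> float:
        """Head-to-head estimate assuming each success rate was earned
        in a pool of n participants; n = 2 is tanh5, n -> infinity is log5."""
        if n < 2:
            raise ValueError("need at least two participants")
        alpha = (n - 1) / n
        q1, q2 = 1.0 - p1, 1.0 - p2
        x, y = (p1 * q2) ** alpha, (q1 * p2) ** alpha
        return x / (x + y)

    for n in (2, 3, 10, 1000):
        print(n, round(pool5(0.6, 0.5, n), 4))
    # 2 0.5505, 3 0.5672, 10 0.5902, 1000 0.5999 -- sliding from tanh5 toward log5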
