

A Dicey Problem: Solution

Problem: Waldo rolls one standard (fair) 6-sided die repeatedly, keeping a running total of what he rolls. To three decimal places, what's the probability that he eventually reaches a total of exactly 1,000,000?

Solution: On average Waldo adds \( \mu = (1+2+3+4+5+6)/6 = 3.5 \) per roll, so we would "intuitively" or "naively" expect the probability of hitting any sufficiently large total to be close to \( 1/3.5 = 2/7 \approx 0.285714 \). This turns out to be correct, so to three decimal places the answer is \( 0.286 \).
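As a sanity check, here's a minimal dynamic-programming sketch in Python (my own illustration; the function name `hit_probability` isn't from the post) that evaluates \( p(n) \) exactly using \( p(n) = \frac{1}{6}\sum_{k=1}^{6} p(n-k) \) with \( p(0) = 1 \):

```python
from collections import deque

def hit_probability(target, faces=6):
    """P(running total of a fair `faces`-sided die ever equals `target`),
    computed exactly via p(n) = (1/faces) * (p(n-1) + ... + p(n-faces))."""
    # window holds p(n-faces), ..., p(n-1); p(0) = 1 and p(n) = 0 for n < 0.
    window = deque([0.0] * (faces - 1) + [1.0], maxlen=faces)
    for _ in range(target):
        window.append(sum(window) / faces)
    return window[-1]

print(round(hit_probability(1_000_000), 6))  # 0.285714, i.e. 2/7 -> 0.286
```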

More generally, if the roll outcomes form any finite set of positive integers with no common factor, then the probability of hitting a particular sufficiently large total is close to \( 1/\mu \), where \( \mu \) is the expected value of a single roll. Note that the probabilities attached to these integers don't need to be equal, either; they just need to be positive and sum to \( 1 \).
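As a rough numerical check of this general claim, here's a sketch with a made-up distribution (the name `hit_probability_general` and the distribution itself are my own illustration, not from the post):

```python
def hit_probability_general(target, dist):
    """P(running total ever equals `target`) when each step is drawn from
    `dist`, a dict mapping positive-integer outcomes to probabilities."""
    p = [0.0] * (target + 1)
    p[0] = 1.0
    for n in range(1, target + 1):
        p[n] = sum(q * p[n - k] for k, q in dist.items() if k <= n)
    return p[target]

# Outcomes {1, 3, 4} have no common factor; mu = 0.2*1 + 0.5*3 + 0.3*4 = 2.9.
dist = {1: 0.2, 3: 0.5, 4: 0.3}
print(hit_probability_general(500, dist), 1 / 2.9)  # both ~0.344828
```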

Proof: I'll show how to prove this rigorously for the case where the outcomes are \( 1 \) and \( 2 \), each with probability \( 1/2 \). Let \( p(n) \) be the probability of totaling the value \( n \). For \( n \geq 2 \), the only ways to total \( n \) are to reach \( n-2 \) and immediately roll a \( 2 \), or to reach \( n-1 \) and immediately roll a \( 1 \). This implies \( p(n) = \frac{1}{2} p(n-1) + \frac{1}{2} p(n-2) \), which is a linear, second-order, homogeneous difference equation (recurrence relation). I'm going to apply the standard theory here; for more details see Wikipedia: Recurrence relation.

In this case the associated characteristic equation is \( r^2 = r/2 + 1/2 \), with roots \( r_1 = 1 \) and \( r_2 = -1/2 \). So we have \( p(n) = c_1 + c_2 (-1/2)^n \) for constants \( c_1, c_2 \). The boundary conditions are \( p(0) = 1 \) and \( p(1) = 1/2 \); solving, we get \( c_1 = 2/3 \) and \( c_2 = 1/3 \). This implies that \( p(n) \) converges to \( 2/3 \) very rapidly (exponentially, in fact), and \( 2/3 = 1/\mu \) as claimed, since here \( \mu = (1+2)/2 = 3/2 \).
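A few lines of Python (again my own sketch) confirm that the closed form \( p(n) = 2/3 + (1/3)(-1/2)^n \) reproduces the recurrence with these boundary conditions:

```python
# Verify the closed form p(n) = 2/3 + (1/3)*(-1/2)**n against the recurrence.
p = [1.0, 0.5]  # p(0), p(1)
for n in range(2, 21):
    p.append(0.5 * p[n - 1] + 0.5 * p[n - 2])

for n, value in enumerate(p):
    assert abs(value - (2/3 + (1/3) * (-0.5) ** n)) < 1e-12

print(p[20] - 2/3)  # ~3.2e-07: exponentially small already by n = 20
```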

Comments

  1. Really interesting. I wonder if you could recommend a book where I can better understand difference equations?

  2. Esteban, I'll write a blog entry on difference equations. They're nice to know and a very useful framework for analyzing recursive problems. Algorithms are a common setting, but far from the only one.

