Saturday, June 18, 2016

What's the Value of a Win?

In a previous entry I demonstrated one simple way to estimate an exponent for the Pythagorean win expectation. Another nice consequence of a Pythagorean win expectation formula is that it makes it simple to estimate the run value of a win in baseball, the point value of a win in basketball, the goal value of a win in hockey, etc.

Let our Pythagorean win expectation formula be \[ w=\frac{P^e}{P^e+1},\] where \(w\) is the win fraction expectation, \(P\) is the ratio of runs scored to runs allowed (or the equivalent for other sports) and \(e\) is the Pythagorean exponent. How do we get an estimate for the run value of a win? The expected number of games won in a season with \(g\) games is \[W = g\cdot w = g\cdot \frac{P^e}{P^e+1},\] so for one estimate we only need to compute the value of the partial derivative \(\frac{\partial W}{\partial P}\) at \(P=1\). Note that \[ W = g\left( 1-\frac{1}{P^e+1}\right), \] and so \[ \frac{\partial W}{\partial P} = g\frac{eP^{e-1}}{(P^e+1)^2},\] from which it follows that \[ \left.\frac{\partial W}{\partial P}\right|_{P=1} = \frac{ge}{4}.\]

Our estimate for the run value of a win now follows by setting \[\frac{\Delta W}{\Delta P} = \frac{ge}{4},\] giving \[ \Delta W = 1 = \frac{ge}{4} \Delta P.\] What is \(\Delta P\)? Well, \(P = R/A\), where \(R\) is runs scored over the season and \(A\) is runs allowed over the season. We're assuming a league-average team and asking how many more runs it would need to score to win one additional game, so \(A\) is fixed at \(L\), the league-average number of runs scored (or allowed), and hence \(\Delta P = \Delta R/L\). This gives us \[1 = \frac{ge}{4} \Delta P = \frac{ge\Delta R}{4L}.\] Now \(L/g = l\), the league average runs per game, so we arrive at the estimate \[\Delta R = \frac{4l}{e}.\] In the specific case of MLB, we have \(e = 1.8\) and \(l = 4.3\), giving that a win is worth approximately \(\Delta R = 9.56\) runs.
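As a quick sanity check, here's a short R snippet comparing the closed form \(\Delta R = 4l/e\) with a direct finite-difference calculation on the win expectation (I'm using the MLB values above, plus \(g = 162\) games as an added assumption):

e <- 1.8; l <- 4.3; g <- 162       # g = 162 games is my added assumption
L <- g * l                         # league-average runs over a full season

wins <- function(R, A) g * (R/A)^e / ((R/A)^e + 1)

4 * l / e                          # closed form: ~9.56 runs per win
1 / (wins(L + 1, L) - wins(L, L))  # finite difference: ~9.57, in close agreement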

Bill James originally used the exponent \(e=2\); in this case the formula simplifies to \(\Delta R = 2l\), i.e. we get the particularly simple result that a win is equal to approximately twice the average number of runs scored per game.

Applying this estimate to the NBA, with \(e = 16.4\) and \(l = 101\), a win is approximately \( \Delta R = \frac{4\cdot 101}{16.4} = 24.6\) points. Similar calculations give estimates of about 4.5 goals per win for the NHL and 5.1 goals per win for the Premier League.

Wednesday, June 8, 2016

A Simple Estimate for Pythagorean Exponents

Given the number of runs scored and runs allowed by a baseball team, what's a good estimate for that team's win fraction? Bill James famously came up with what he called the "Pythagorean expectation" \[w = \frac{R^2}{R^2 + A^2},\] which can also be written as \[w = \frac{{(R/A)}^2}{{(R/A)}^2 + 1}.\] More generally, if team \(i\) scores \(R_i\) and allows \(A_i\) runs, the Pythagorean estimate for the probability of team \(1\) beating team \(2\) is \[w = \frac{{(R_1/A_1)}^2}{{(R_1/A_1)}^2 + {(R_2/A_2)}^2}.\] The estimate of a single team's win fraction is a consequence of this, as an average team would by definition have \(R_2 = A_2\).

Now, there's nothing magical about the exponent being 2; it's a coincidence, and in fact 2 is not even the "best" exponent. So what's a good way to estimate the exponent? Note the structural similarity between this win probability estimator and the Bradley-Terry estimator \[ w = \frac{P_1}{P_1+P_2},\] where the \(P_i\) are what we could call the "Bradley-Terry power" of each team. This immediately suggests one way to estimate the expectation model's exponent: fit a Bradley-Terry model, then fit the log-linear regression of \(\log(P_i)\) vs \(\log(R_i/A_i)\). The slope of this regression is one estimate for the Pythagorean exponent.
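To make the recipe concrete, here's a minimal R sketch (not the code from my repo; the games and teams data frames and their columns are hypothetical, and I've omitted a home-advantage term for simplicity). Bradley-Terry fits as a logistic regression on +1/-1 team indicators, and the exponent estimate is then the slope of the log-log regression:

# games: one row per game, with columns home, away, home_win (1/0)
# teams: one row per team, with columns team, R (runs scored), A (runs allowed)
bt_design <- function(games, team_names) {
  X <- matrix(0, nrow(games), length(team_names),
              dimnames=list(NULL, team_names))
  X[cbind(seq_len(nrow(games)), match(games$home, team_names))] <- 1
  X[cbind(seq_len(nrow(games)), match(games$away, team_names))] <- -1
  X
}

X <- bt_design(games, teams$team)[, -1]      # drop one team for identifiability
bt <- glm(games$home_win ~ X - 1, family=binomial)
log_P <- c(0, coef(bt))                      # reference team has log(P) = 0
coef(lm(log_P ~ log(teams$R / teams$A)))[2]  # the slope is the exponent estimate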

How well does this work? For MLB in 2014 I get an exponent of 1.727. The R code and data files for MLB and other sports may be found in my GitHub repo.

Monday, May 2, 2016

Behind the Spreadsheet

In the book "The Only Rule Is It Has to Work: Our Wild Experiment Building a New Kind of Baseball Team", Ben Lindbergh and Sam Miller recount a grand adventure to take command of an independent league baseball team, with the vision of trying every idea, sane or crazy, in an attempt to achieve a winning edge. Five infielders, four outfielders, defensive shifts, optimizing lineups - everything.

It was really an impossible task. Professional sports at every level are filled with highly accomplished and competitive athletes, with real lives and real egos. Now imagine walking in one day and suddenly trying to convince them that they should be doing things differently. Who do you think you are?

I was one of the analysts who helped Ben and Sam in this quest, and I wanted to write down some thoughts from my own perspective, not as one of the main characters, but as someone more behind the scenes. These are only some very short initial thoughts, but I'd like to follow up with more ideas on where things went wrong from my perspective, and on how independent league teams can better identify roster talent from non-traditional sources.

My focus was on attempting to identify talent overlooked in the MLB draft. This is extremely challenging; there are 30 teams and 40 standard rounds, plus other picks, and many of the players left after the draft sign as amateur free agents. You're left with players from lower divisions and very small schools, 23-year-old seniors, bad bodies, soft tossers, poor defenders, etc. But still, there may be players who aren't good MLB prospects yet could perform well as part of an independent league team.

Looking at top framing college catchers was a bust; this is a premium defensive position and very little is overlooked.

Among the undrafted senior hitters and pitchers there were several potential prospects, many of whom you'll read about in the book. The most important fact to keep in mind is that these are real people with real lives, real families and real hopes and dreams, and playing independent ball isn't nearly lucrative enough to pay the bills. Harsh reality will limit your pool even more, and those who choose to pursue it will face the additional stress of financial strain.

That being said, was Ben and Sam's experiment a success? You'll have to read the book, but absolutely, some talent was found.

Tuesday, February 9, 2016

When is a Lead Safe in the NBA?

Assuming two NBA teams of equal strength with \(t\) seconds remaining, what is a safe lead at a prescribed confidence level? Bill James has a safe lead formula for NCAA basketball, and the topic has been addressed by other researchers at various levels of complexity, e.g. Clauset, Kogan and Redner.

I'll present a simple derivation. Start by observing that there are about 50 scoring groups per team per game (a scoring group includes all baskets and free throws that occur at the same time), with each scoring group worth about two points. Assume each team's scoring groups are Poisson distributed with parameter \(\lambda = \frac{50\cdot t}{48\cdot 60} = \frac{t}{57.6}\). Using a normal approximation, the difference of these two distributions is normal with mean 0 and variance \(2\lambda\), giving a standard deviation of \(\sqrt{2\lambda} = 0.1863\sqrt{t}\).

Using this approximation, what is a 90% safe lead? A 90% tail is 1.28 standard deviations, i.e. \(1.28\cdot 0.1863\sqrt{t} = 0.2385\sqrt{t}\) scoring groups. As a scoring group is worth about two points, a 90% safe lead, assuming a jump ball, is about \(0.477\sqrt{t}\) points (Clauset et al. obtained \(0.4602\sqrt{t}\)). For example, a safe lead at halftime is approximately \(0.477 \sqrt{24\cdot 60} = 18.1\) points.
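The whole estimate fits in a few lines of R (this is a sketch of the approximation above, not of Clauset et al.'s method):

safe_lead <- function(t, conf=0.90) {
  lambda <- t / 57.6             # expected scoring groups per team
  sd_diff <- sqrt(2 * lambda)    # sd of the score difference, in scoring groups
  2 * qnorm(conf) * sd_diff      # ~2 points per scoring group
}
safe_lead(24 * 60)               # halftime: ~18.1 points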

Next - adjustments for possession arrow and shot clock time; validity of approximation; adjusting for team strengths.

Thursday, October 15, 2015

An Enormous Number of Kilograms

For years the kilogram was defined with respect to a platinum-iridium cylinder, but this is no longer the case. Here's a puzzle about kilograms that's easy to state and understand, but whose answer is very, very surprising.

I've always had a fascination with really large numbers: first 100 when I was really little, then, as I got older and more sophisticated, numbers like a googol and the smallest number satisfying the conditions of the Archimedes cattle problem.

When I was an undergraduate I interviewed for a summer internship with an insurance company as an actuarial student. They gave me the following puzzle: what's the smallest number that doubles when you move its last digit to the front? I calculated for a little while, then said, "This can't be right, my answer has 18 digits!" It turns out that the smallest solution does, indeed, have 18 digits.

We can see this by letting our \((n+1)\)-digit number be \( x = 10 m + a\), where \(m\) is an \(n\)-digit number and \(0\leq a < 10\). Moving \(a\) to the front we get \(y = 10^n a + m\), and our requirement is \(y = 2x\). This gives: \begin{align} 20 m + 2a &= 10^n a + m \\ 19 m &= a(10^n-2) \\ m &= \frac{2a(5\cdot 10^{n-1} - 1)}{19} \end{align} The smallest \(m\), if one exists, requires \(a,n\) such that 19 divides \(5\cdot 10^{n-1}-1\) (as 19 can't divide \(2a\)) and the result has \(n\) digits. It's easy to check that the smallest value of \(n\) satisfying the divisibility condition is \(n=17\). To get the smallest solution we try \(a=1\), but this yields a value with only 16 digits. Setting \(a=2\), however, yields the 17-digit \(m = 10526315789473684\). The smallest solution to our puzzle is therefore the 18-digit number \(105263157894736842\); that's surprisingly large.
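Here's a short R verification of this argument; it assumes the gmp package for exact big-integer arithmetic:

library(gmp)

# find the smallest n with 19 dividing 5*10^(n-1) - 1, working mod 19
n <- 1; p <- 1                   # p tracks 10^(n-1) mod 19
while ((5 * p - 1) %% 19 != 0) {
  p <- (p * 10) %% 19
  n <- n + 1
}
n                                # 17

m <- 4 * ((5 * as.bigz(10)^(n-1) - 1) %/% 19)  # a = 2; the division is exact
x <- 10 * m + 2
x                                # 105263157894736842
2 * x                            # 210526315789473684: the last digit moved to the front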

Numbers with this type of property are known as parasitic numbers.

Later, I wondered if there were numbers with the slightly different, but equally interesting, property that moving the last digit to the front converts ("autoconverts") a value under one unit of measurement into the equivalent value under a different unit of measurement.

The first one I tried: moving the last digit to the front converts a temperature in Celsius to the equivalent temperature in Fahrenheit. This is a fun puzzle that eventually made its way into the New York Times. The smallest such value is 275 C, which exactly equals 527 F. What's the next smallest temperature?
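A brute-force search in R finds 275 quickly; extend the range to find the next one (the helper function here is just for this post):

rotate_last_to_front <- function(x) {
  d <- strsplit(as.character(x), "")[[1]]
  as.numeric(paste(c(d[length(d)], d[-length(d)]), collapse=""))
}

for (celsius in 1:10^6) {
  if ((9 * celsius + 160) %% 5 == 0) {    # F = 9C/5 + 32 is a whole number
    f <- (9 * celsius + 160) / 5
    if (f == rotate_last_to_front(celsius)) print(celsius)  # 275 is the first hit
  }
}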

How about moving the first digit to the end? We'll need to use the little-known fact that, legally, a pound is exactly equal to 0.45359237 kilograms. Given this, does there exist a number such that moving the first digit to the end converts from pounds to kilograms? The answer is yes, but the smallest solution has 108,437,840 digits! The solution is similar to the above, but as it's computationally more involved I've written Sage code to solve it, which you can find in my GitHub puzzles repository.

The smallest number that autoconverts from gallons to liters, incidentally, is even bigger at 382,614,539 digits!

Saturday, October 10, 2015

Solving a Math Puzzle using Physics

The following math problem, which appeared on a Scottish maths paper, has been making the internet rounds.

The first two parts require students to interpret the meaning of the components of the formula \(T(x) = 5 \sqrt{36+x^2} + 4(20-x) \), and the final "challenge" component involves finding the minimum of \( T(x) \) over \( 0 \leq x \leq 20 \). Usually this would require differentiation, but if you know Snell's law you can write down the solution almost immediately. People normally think of Snell's law in the context of light and optics, but it's really a statement about least-time paths across media that permit different travel velocities.

One way to phrase Snell's law is that least travel time is achieved when \[ \frac{\sin{\theta_1}}{\sin{\theta_2}} = \frac{v_1}{v_2},\] where \( \theta_1, \theta_2\) are the angles to the normal and \(v_1, v_2\) are the travel velocities in the two media.

In our puzzle the crocodile has an implied travel velocity of 1/5 in the water and 1/4 on land. Furthermore, the crocodile travels along the riverbank once it hits land, so \( \theta_2 = 90^{\circ}\) and \(\sin{\theta_2} = 1\). Snell's law now says that the path of least time satisfies \[ \sin{\theta_1} = \frac{x}{\sqrt{36+x^2}} = \frac{4}{5},\] giving us \( 25x^2 = 16x^2 + 24^2\). Solving, \(9x^2 = 24^2\), so \(x^2 = 64\) and the solution is \(x = 8\).
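A quick numeric check in R confirms the answer:

travel_time <- function(x) 5 * sqrt(36 + x^2) + 4 * (20 - x)
optimize(travel_time, c(0, 20))  # minimum at x = 8, where T(8) = 98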

Sunday, October 4, 2015

Mixed Models in R - Bigger, Faster, Stronger

When you start doing more advanced sports analytics you'll eventually start working with what are known as hierarchical, nested or mixed effects models. These are models that contain both fixed and random effects. There are multiple ways of distinguishing fixed from random effects, but one I find particularly useful is that random effects are "predicted" rather than "estimated", and this in turn involves some "shrinkage" towards the mean.

Here's some R code for NCAA ice hockey power rankings using a nested Poisson model (which can be found in my hockey GitHub repository):
library(lme4)

model <- gs ~ year+field+d_div+o_div+game_length+(1|offense)+(1|defense)+(1|game_id)
fit <- glmer(model, data=hockey, family=poisson)  # 'hockey' is a placeholder for the game data
The fixed effects are year, field (home/away/neutral), d_div (NCAA division of the defense), o_div (NCAA division of the offense) and game_length (number of overtime periods); offense (strength of offense), defense (strength of defense) and game_id are all random effects. The reason for modeling team offenses and defenses as random vs fixed effects is that I view them as random samples from the same distribution. As mentioned above, this results in statistical shrinkage or "regression to the mean" for these values, which is particularly useful for partially completed seasons.

One of the problems with large mixed models is that they can be very slow to fit. For example, the model above takes several hours on a 12-core workstation, which makes it very difficult to test model changes and tweaks. Is there any way to speed up the fitting process? Certainly! One way is to add two options to the above code:

fit <- glmer(model, data=hockey, family=poisson,
      nAGQ=0,
      control=glmerControl(optimizer = "nloptwrap"))
What do these do? Model fitting is an optimization process. Part of that process involves estimating particular integrals, which can be very slow; the option nAGQ=0 tells glmer to skip estimating those integrals. For some models this has minimal impact on the parameter estimates, and this NCAA hockey model is one of them. The second option tells glmer to fit using the "nloptwrap" optimizer (there are several other optimizers available, too), which tends to be faster than the default optimization method.

The impact can be rather startling. With the default options the above model takes about 3 hours to fit. Add these two options, and the fitting takes about 30 seconds with minimal impact on the parameter estimates: roughly 360 times faster.