## Saturday, July 11, 2015

### Power Rankings: Looking at a Very Simple Method

One of the simplest and most common power ranking models is the Bradley-Terry-Luce (BTL) model, which is equivalent to other famous models such as the logistic model and the Elo rating system. I'll be referring to "teams" here, but of course the same ideas apply to any two-participant game.

Let me clarify what I mean when I use the term "power ranking". A power ranking supplies not only a ranking of teams but also numbers that can be used to estimate the probabilities of various outcomes were two particular teams to play a match.

In the BTL power ranking system we assume the teams have some latent (hidden/unknown) "strength" $$R_i$$, and that the probability of $$i$$ beating $$j$$ is $$\frac{R_i}{R_i+R_j}$$. Note that each $$R_i$$ is assumed to be strictly positive. Where does this model structure come from?

Here are three reasonable constraints for a power ranking model:
1. If teams $$i$$ and $$j$$ have equal strengths $$R_i = R_j$$, the probability of one beating the other should be $$\frac{1}{2}$$.
2. As the strength of one team strictly approaches 0 (infinitely weak) with the other team fixed, the probability of the other team winning strictly increases to 1.
3. As the strength of one team strictly increases to infinity (infinitely strong) with the other team fixed, the probability of the other team winning strictly decreases to 0.
Note that our model structure satisfies all three constraints. Can you think of other simple model structures that satisfy all three constraints?
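As a quick sanity check, here's a small Python sketch of the three constraints applied to the BTL formula (my own illustration; the strengths are arbitrary):

```python
# Win probability under the BTL model: P(i beats j) = R_i / (R_i + R_j)
def p_win(r_i, r_j):
    return r_i / (r_i + r_j)

# Constraint 1: equal strengths give a 50/50 game.
print(p_win(1.0, 1.0))     # 0.5

# Constraint 2: as the opponent's strength approaches 0, the win probability approaches 1.
print(p_win(1.0, 1e-9))

# Constraint 3: as the opponent's strength grows without bound, the win probability approaches 0.
print(p_win(1.0, 1e9))
```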

Given this model and a set of teams and match results, how can we estimate the $$R_i$$? The maximum-likelihood estimators are the set of $$R_i$$ that maximizes the probability of the observed outcomes actually happening. For any given match the probability of team $$i$$ beating team $$j$$ is $$\frac{R_i}{R_i+R_j}$$, so the overall probability of the observed outcomes of the matches $$M$$ occurring is $\mathcal{L} = \prod_{m\in M} \frac{R_{w(m)}}{R_{w(m)}+R_{l(m)}},$ where $$w(m)$$ is the winner and $$l(m)$$ the loser of match $$m$$. We can transform this product into a sum by taking logarithms: $\log\left( \mathcal{L} \right) = \sum_{m\in M}\left[\log\left(R_{w(m)}\right) - \log\left(R_{w(m)}+R_{l(m)}\right)\right].$ Before going further, let's make a useful reparameterization by setting $$e^{r_i} = R_i$$; this makes sense as we're requiring the $$R_i$$ to be strictly positive. We then get $\log\left( \mathcal{L} \right) = \sum_{m\in M}\left[ r_{w(m)} - \log\left(e^{r_{w(m)}}+e^{r_{l(m)}}\right)\right].$ Taking partial derivatives and setting them to zero, we get \begin{eqnarray*}
\frac{\partial \log\left( \mathcal{L} \right)}{\partial r_i} &=& \sum_{w(m)=i} 1 - \frac{e^{r_{w(m)}}}{e^{r_{w(m)}}+e^{r_{l(m)}}} + \sum_{l(m)=i} - \frac{e^{r_{l(m)}}}{e^{r_{w(m)}}+e^{r_{l(m)}}}\\
&=& \sum_{w(m)=i} 1 - \frac{e^{r_i}}{e^{r_i}+e^{r_{l(m)}}} + \sum_{l(m)=i} - \frac{e^{r_i}}{e^{r_{w(m)}}+e^{r_i}}\\
&=&0.
\end{eqnarray*} But this is just the number of actual wins minus the expected wins! Thus, the maximum likelihood estimators for the $$r_i$$ satisfy $$O_i = E_i$$ for all teams $$i$$, where $$O_i$$ is the actual (observed) number of wins for team $$i$$, and $$E_i$$ is the expected number of wins for team $$i$$ based on our model. That's a nice property!

If you'd like to experiment with some actual data, and to see that the resulting fit does indeed satisfy this property, here's an example BTL model using NCAA men's ice hockey scores. You can, of course, actually use this property to iteratively solve for the maximum-likelihood estimates $$R_i$$. Note that you'll have to fix one of the $$R_i$$ to be a particular value (or add some other constraint), as the model probabilities are invariant with respect to multiplication of the $$R_i$$ by the same positive scalar.
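Here's a sketch of that iterative approach in Python, with made-up match data. Rearranging $$O_i = E_i$$ gives the fixed-point update $$R_i = O_i / \sum 1/(R_i + R_j)$$, with the sum running over team $$i$$'s matches; iterating this, and rescaling each pass to handle the free scalar, converges to the MLE when the win-loss graph is strongly connected:

```python
from collections import defaultdict

def bt_fit(matches, iters=500):
    """Fit BTL strengths from a list of (winner, loser) results.

    The stationarity condition O_i = E_i rearranges to the fixed-point
    update R_i = O_i / sum over i's matches of 1/(R_i + R_j)."""
    wins = defaultdict(int)
    opponents = defaultdict(list)   # one entry per match played
    for w, l in matches:
        wins[w] += 1
        opponents[w].append(l)
        opponents[l].append(w)
    R = {t: 1.0 for t in opponents}
    for _ in range(iters):
        # Fixed-point update derived from actual wins = expected wins.
        R = {i: wins[i] / sum(1.0 / (R[i] + R[j]) for j in opponents[i])
             for i in R}
        # Rescale so strengths average 1, pinning down the arbitrary scale.
        scale = len(R) / sum(R.values())
        R = {i: r * scale for i, r in R.items()}
    return R
```

At convergence the fitted strengths satisfy the wins-equal-expected-wins property up to numerical tolerance, which makes for an easy self-check of any implementation.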

### Getting Started Doing Baseball Analysis without Coding

There's a lot of confusion about how best to get started doing baseball analysis. It doesn't have to be difficult! You can start doing it right away, even if you don't know anything about R, Python, Ruby, SQL or machine learning (most GMs can't code). Learning these and other tools makes it easier and faster to do analysis, but they're only part of the process of constructing a well-reasoned argument. They're important, of course, because they can turn 2 months of hard work into 10 minutes of typing. Even if you don't like mathematics, statistics, coding or databases, they're mundane necessities that can make your life much easier and your analysis more powerful.

Here are two example problems. You don't have to do these specifically, but they illustrate the general idea. Write up your solutions, then publish them for other people to make some (hopefully) helpful comments and suggestions. This can be on a blog or through a version control platform like GitHub (which is also great for versioning any code or data you use). Try to write well! A great argument that's poorly written and poorly presented isn't going to be very convincing. Once it's finished, review and revise, review and revise, review and revise.
1. When a team you follow makes a move, treat it as a puzzle for you to solve. Why did they do it, and was it a good idea?
2. Pick any MLB team and review the draft picks they made in the 2015 draft for the first 10 rounds. Do you notice any trends or changes from the 2014 draft? Do these picks agree or disagree with the various public pre-draft player rankings? Which picks were designed to save money to help sign other picks? Identify those tough signings. Was the team actually able to sign them, and were the money-saving picks still reasonably good picks? Do your best to identify which picks you thought were good and bad, write them down in a notebook with your reasoning, then check back in 6 months and a year. Was your reasoning correct? If not, what were your mistakes and how can you avoid making them in the future?

## Tuesday, March 3, 2015

### Who Controls the Pace in Basketball, Offense or Defense?

During a recent chat with basketball analyst Seth Partnow, he mentioned a topic that came up during a discussion at the recent MIT Sloan Sports Analytics Conference. Who controls the pace of a game more, the offense or the defense? And what is the percentage of pace responsibility for each side? The analysts came up with a rough consensus opinion, but is there a way to answer this question analytically? I came up with one approach that examines the variations in possession times, but it suddenly occurred to me that this question could also be answered immediately by looking at the offense-defense asymmetry of the home court advantage.

As you can see in the R output of my NCAA team model code in one of my public basketball repositories, the offense at home scores points at a rate about $$e^{0.0302} = 1.031$$ times the rate on a neutral court, everything else the same. Likewise, the defense at home allows points at a rate about $$e^{-0.0165} = 0.984$$ times the rate on a neutral court; in both cases the neutral court rate is the reference level. Notice the geometric asymmetry; $$1.031\cdot 0.984 = 1.015 > 1$$. The implication is that the offense is responsible for about the fraction $\frac{(1.031-1)}{(1.031-1)+(1-0.984)} = 0.66$ of the scoring pace. That is, the offense controls 2/3 of the pace and the defense 1/3. The consensus opinion the analysts came up with at Sloan? It was 2/3 offense, 1/3 defense! It's nice when things work out, isn't it?
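If you'd like to reproduce the arithmetic, here's a short Python sketch using the two fitted coefficients quoted above:

```python
import math

home_off = math.exp(0.0302)    # home offense scoring-rate multiplier vs neutral court
home_def = math.exp(-0.0165)   # home defense points-allowed multiplier vs neutral court

# Geometric asymmetry: the product exceeds 1, so the home boost to scoring
# outweighs the home suppression of the opponent's scoring.
asymmetry = home_off * home_def

# Offense's share of pace responsibility, per the decomposition in the text.
off_share = (home_off - 1) / ((home_off - 1) + (1 - home_def))
print(asymmetry, off_share)
```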

I've used NCAA basketball because there are plenty of neutral court games; to examine the NBA directly we'll have to use a more sophisticated (but perhaps less beautiful) approach involving the variation in possession times. I'll do that next, and I'll also show you how to apply this new information to make better game predictions. Finally, there's a nice connection to some recent work on inferring causality that I'll outline.

## Wednesday, February 11, 2015

### Baseball's Billion Dollar Equation

In 1999 Voros McCracken famously speculated about the amount of control the pitcher had over balls put in play. Not so much, as it turned out, and DIPS was born. It's tough to put a value on something like DIPS, but if an MLB team had developed and exploited it for several years, it could potentially have been worth hundreds of millions of dollars. Likewise, catcher framing could easily have been worth hundreds of millions.

How about a billion dollar equation? Sure, look at the baseball draft. An 8th round draft pick like Paul Goldschmidt could net you a $200M surplus. And then there's Chase Headley, Matt Carpenter, Brandon Belt, Jason Kipnis and Matt Adams. The commonality? All college position players easily identified as likely major leaguers purely through statistical analysis. You can also do statistical analysis for college pitchers, of course, but ideally you'd also want velocities. These are frequently available through public sources, but you may have to put them together manually. We'll also find that GB/FB ratios are important.

There's plenty of public data available. I've made yearly NCAA college baseball data available in my public baseball GitHub account; it covers 2002-2014, which is plenty of data for analysis. Older years are also available, but only in PDF format. So you'll either have to enter the data manually, use a service or do some high-quality automated OCR. My repository also includes NCAA play-by-play data from several sources, which among other things is useful for building catcher framing and defensive estimates. Also publicly available, and coming to my GitHub over the next several days:
1. NAIA - roughly NCAA D2 level
2. NJCAA - junior college, same rules as NCAA
3. CCCAA - junior college
4. NWAACC - junior college

Prospects come out of the NAIA and NCAA D2/D3 divisions every year, and with the free agent market valuing a single win at around $7M you want to make sure you don't overlook any player with talent. With JUCO players you'd like to identify that sleeper before he transfers to an NCAA D1 and has a huge year. Later you'll also want to analyze:
1. Summer leagues
2. Independent leagues
We'll start by looking at what data is available and how to combine the data sets. There are always player transfers to identify, and NCAA teams frequently play interdivision games as well as NAIA teams. We'll want to build a predictive model that identifies the most talented players uniformly across all leagues, so this will be a boring but necessary step.

### A Very Rough Guide to Getting Started in Data Science: Part II, The Big Picture

Data science seems completely overwhelming to a beginner. Not only are there huge numbers of programming languages, packages and algorithms, but even managing your data is an entire area in itself. Some examples: the languages R, Python, Ruby, Perl, Julia, Mathematica and MATLAB/Octave; the packages SAS, Stata and SPSS; the algorithms linear regression, logistic regression, nested models, neural nets, support vector machines, linear discriminant analysis and deep learning. For managing your data, some people use Excel, or a relational database like MySQL or PostgreSQL. And where do things like big data, NoSQL and Hadoop fit in? And what's gradient descent, and why is it important? But perhaps the most difficult part of all is that you actually need to know and understand statistics, too.

It does seem overwhelming, but there's a simple key idea - data science is using data to answer a question. Even if you're only sketching a graph using a stick and a sandbox, you're still doing data science. Your goal for data science should be to continually learn better, more powerful and more efficient ways to answer your questions. My general framework has been strongly influenced by George Pólya's wonderful book "How to Solve It". While it's directed at solving mathematical problems, his approach is helpful for solving problems in general.

"How to Solve It" suggests the following steps when solving a mathematical problem:
1. First, you have to understand the problem.
2. After understanding, then make a plan.
3. Carry out the plan.
4. Review/extend. Look back on your work. How could it be better?
Pólya goes into much greater detail for each step and provides some illustrative examples. It's not the final word on how to approach and solve mathematical problems, but it's very helpful and I highly recommend it. For data science, the analogous steps from my perspective would be:
1. What questions do you want to answer?
2. What data would be helpful to answer these questions? How and where do you get this data?
3. Given the question you want to answer and the data you have, which approaches and models are likely to be useful? This can be very confusing. There are always tradeoffs - underfitting vs overfitting, bias vs variance, simplicity vs complexity, understanding where something came from vs what it's doing.
4. Perform analysis/fit model.
5. How do you know if your model and analysis are good or bad, and how confident should you be in your predictions and conclusions? This step is critical, but it's commonly treated lightly or even skipped entirely.
6. Given the results, what should you try next?
Let's follow Pólya and do an illustrative example next.

## Tuesday, February 10, 2015

### More Measles: Vaccination Rates and School Funding

I took a look at California's personal belief exemption rate (PBE) for kindergarten vaccinations in Part I. California also provides poverty information for public schools through the Free or Reduced Price Meals data sets, both of which conveniently include California's school codes. Cleaned versions of these data sets and my R code are in my vaccination GitHub.

We can use the school code as a key to join these two data sets. But remember, the FRPM data set only includes data about public schools, so to retain the private school data for PBEs we'll do what's called a left outer join. This still performs a join on the school code key, but any school codes in the left data set that don't have corresponding entries in the right data set are still retained; the missing values for the right data set's columns are set to NA.

We can perform a left outer join in R by using "merge" with the option "all.x=TRUE". I'll start by looking at how the PBE rate varies between charter, non-charter public and private schools, so we'll need to replace those missing values for funding source after our join. If the funding source is missing, it's a private school. The FRPM data also denotes non-charter public schools with funding type "", so I'll replace those with "aPublic" for convenience. For factors, R will by default set the reference level to be the level that comes first alphabetically.
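For readers who don't use R, here's the same left outer join logic sketched in plain Python, with made-up school codes and values purely for illustration:

```python
# Toy stand-ins for the two data sets, keyed on school code (values are hypothetical).
pbe = [{"school_code": 101, "pbe_rate": 0.02},
       {"school_code": 102, "pbe_rate": 0.08},
       {"school_code": 103, "pbe_rate": 0.05}]   # 103 has no FRPM entry: a private school
frpm = {101: "aPublic", 102: "charter-direct"}

# Left outer join: keep every row of the left (PBE) data; rows with no match
# in the right (FRPM) data get a missing funding value (None)...
joined = [{**row, "funding": frpm.get(row["school_code"])} for row in pbe]

# ...which we then interpret as "private", since FRPM only covers public schools.
for row in joined:
    if row["funding"] is None:
        row["funding"] = "private"
```

This mirrors R's `merge` with `all.x=TRUE` followed by replacing the missing funding values.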

Here's a subset of the output. The addition of the funding source is an improvement over the model that doesn't include it, and the estimated odds ratio for funding source is highest for directly funded charter schools, followed by locally funded charter schools and private schools. Remember, public schools are the reference level, so for the public level $$\log(\text{odds ratio}) = 0$$. Everything else being equal, our odds ratio estimates based on funding source would be: \begin{align*}
\mathrm{OR}_{\text{public}} &= e^{-3.820}\times e^{0} &= 0.022\\
\mathrm{OR}_{\text{private}} &= e^{-3.820}\times e^{0.752} &= 0.047\\
\mathrm{OR}_{\text{charter-local}} &= e^{-3.820}\times e^{1.049} &= 0.063\\
\mathrm{OR}_{\text{charter-direct}} &= e^{-3.820}\times e^{1.348} &= 0.085
\end{align*}
Converting to estimated PBE rates, we get: \begin{align*}
\mathrm{PBE}_{\text{public}} &= \frac{0.022}{1+0.022} &= 0.022\\
\mathrm{PBE}_{\text{private}} &= \frac{0.047}{1+0.047} &= 0.045\\
\mathrm{PBE}_{\text{charter-local}} &= \frac{0.063}{1+0.063} &= 0.059\\
\mathrm{PBE}_{\text{charter-direct}} &= \frac{0.085}{1+0.085} &= 0.078
\end{align*}
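This back-transformation from fitted coefficients to estimated PBE rates is easy to reproduce; a quick Python sketch using the values above:

```python
import math

intercept = -3.820
funding_effects = {"public": 0.0, "private": 0.752,
                   "charter-local": 1.049, "charter-direct": 1.348}

rates = {}
for funding, beta in funding_effects.items():
    odds = math.exp(intercept + beta)    # odds for this funding source
    rates[funding] = odds / (1 + odds)   # back-transform odds to a rate
print(rates)
```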