## Wednesday, February 11, 2015

### Baseball's Billion Dollar Equation

In 1999 Voros McCracken famously speculated about the amount of control the pitcher had over balls put in play. Not so much, as it turned out, and DIPS was born. It's tough to put a value on something like DIPS, but if an MLB team had developed and exploited it for several years, it could potentially have been worth hundreds of millions of dollars. Likewise, catcher framing could easily have been worth hundreds of millions.

How about a billion dollar equation? Sure: look at the baseball draft. An 8th round draft pick like Paul Goldschmidt could net you a $200M surplus. And then there's Chase Headley, Matt Carpenter, Brandon Belt, Jason Kipnis and Matt Adams. The commonality? All were college position players easily identified as likely major leaguers purely through statistical analysis. You can also do statistical analysis for college pitchers, of course, but ideally you'd also want velocities. These are frequently available through public sources, but you may have to put them together manually. We'll also find that GB/FB ratios are important.

There's plenty of public data available. I've made yearly NCAA college baseball data available in my public baseball GitHub account; it covers 2002-2014, which is plenty of data for analysis. Older years are also available, but only in PDF format, so you'll either have to enter the data manually, use a service or do some high-quality automated OCR. My repository also includes NCAA play-by-play data from several sources, which among other things is useful for building catcher framing and defensive estimates.

Also publicly available, and coming to my GitHub over the next several days:
1. NAIA - roughly NCAA D2 level
2. NJCAA - junior college, same rules as NCAA
3. CCCAA - junior college
4. NWAACC - junior college

Prospects come out of the NAIA and NCAA D2/D3 divisions every year, and with the free agent market valuing a single win at around $7M you want to make sure you don't overlook any player with talent. With JUCO players you'd like to identify that sleeper before he transfers to an NCAA D1 school and has a huge year. Later you'll also want to analyze:
1. Summer leagues
2. Independent leagues
We'll start by looking at what data is available and how to combine the data sets. There are always player transfers to identify, and NCAA teams frequently play interdivision games as well as games against NAIA teams. We'll want to build a predictive model that identifies the most talented players uniformly across all leagues, so this will be a boring but necessary step.
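As a sketch of that combining step (the file layout and player names here are hypothetical; the real data lives in the GitHub repositories mentioned above):

```python
import pandas as pd

# Hypothetical per-league batting tables with the same columns.
ncaa = pd.DataFrame({"player": ["A. Smith", "B. Jones"], "obp": [0.420, 0.390]})
naia = pd.DataFrame({"player": ["C. Lee"], "obp": [0.450]})

# Tag each row with its league, then stack into one uniform table so a
# single predictive model can be fit across all leagues at once.
ncaa["league"] = "NCAA"
naia["league"] = "NAIA"
players = pd.concat([ncaa, naia], ignore_index=True)
print(players)
```

With a league column in place, league strength can later enter the model as a predictor rather than forcing separate per-league models.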

### A Very Rough Guide to Getting Started in Data Science: Part II, The Big Picture

Data science seems completely overwhelming to a beginner. Not only are there huge numbers of programming languages, packages and algorithms, but even managing your data is an entire area in itself. Some examples: the languages R, Python, Ruby, Perl, Julia, Mathematica and MATLAB/Octave; the packages SAS, Stata and SPSS; algorithms such as linear regression, logistic regression, nested models, neural nets, support vector machines, linear discriminant analysis and deep learning. For managing your data, some people use Excel or a relational database like MySQL or PostgreSQL. And where do things like big data, NoSQL and Hadoop fit in? What's gradient descent and why is it important? But perhaps the most difficult part of all is that you actually need to know and understand statistics, too.

It does seem overwhelming, but there's a simple key idea - data science is using data to answer a question. Even if you're only sketching a graph using a stick and a sandbox, you're still doing data science. Your goal for data science should be to continually learn better, more powerful and more efficient ways to answer your questions. My general framework has been strongly influenced by George Pólya's wonderful book "How to Solve It". While it's directed at solving mathematical problems, his approach is helpful for solving problems in general.

"How to Solve It" suggests the following steps when solving a mathematical problem:
1. First, you have to understand the problem.
2. After understanding, then make a plan.
3. Carry out the plan.
4. Review/extend. Look back on your work. How could it be better?
Pólya goes into much greater detail for each step and provides some illustrative examples. It's not the final word on how to approach and solve mathematical problems, but it's very helpful and I highly recommend it. For data science, the analogous steps from my perspective would be:
1. What questions do you want to answer?
2. What data would be helpful to answer these questions? How and where do you get this data?
3. Given the question you want to answer and the data you have, which approaches and models are likely to be useful? This can be very confusing. There are always tradeoffs: underfitting vs overfitting, bias vs variance, simplicity vs complexity, interpretability vs predictive power.
4. Perform analysis/fit model.
5. How do you know if your model and analysis are good or bad, and how confident should you be in your predictions and conclusions? This step is critical, but it's commonly treated lightly or even skipped entirely.
6. Given the results, what should you try next?
Let's follow Pólya and do an illustrative example next.

## Tuesday, February 10, 2015

### More Measles: Vaccination Rates and School Funding

I took a look at California's personal belief exemption rate (PBE) for kindergarten vaccinations in Part I. California also provides poverty information for public schools through the Free or Reduced Price Meals data sets, both of which conveniently include California's school codes. Cleaned versions of these data sets and my R code are in my vaccination GitHub.

We can use the school code as a key to join these two data sets. But remember, the FRPM data set only includes data about public schools, so we'll have to retain the private school data for PBEs by doing what's called a left outer join. This still performs a join on the school code key, but any school codes in the left data set that don't have corresponding entries in the right data set are retained, with the right data set's columns filled in as missing values (NA in R).

We can perform a left outer join in R by using "merge" with the option "all.x=TRUE". I'll start by looking at how the PBE rate varies between charter, non-charter public and private schools, so we'll need to replace those missing values for funding source after our join. If the funding source is missing, it's a private school. The FRPM data also denotes non-charter public schools with funding type "", so I'll replace those with "aPublic" for convenience. For factors, R will by default set the reference level to be the level that comes first alphabetically.
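The join and recoding steps above can be sketched as follows (the post's actual code is R and lives in the vaccination GitHub; this is an analogous pandas version with hypothetical column names):

```python
import pandas as pd

# Hypothetical miniature versions of the two data sets, keyed by school code.
pbe = pd.DataFrame({
    "school_code": [1, 2, 3],
    "pbe_rate": [0.02, 0.05, 0.08],
})
frpm = pd.DataFrame({
    "school_code": [1, 2],               # school 3 is private: no FRPM entry
    "funding": ["", "Directly funded"],  # "" marks non-charter public
})

# Left outer join: keep every school in the PBE data; schools missing from
# the FRPM data get missing values (R: merge(pbe, frpm, all.x=TRUE)).
joined = pbe.merge(frpm, on="school_code", how="left")

# Missing funding source means a private school; recode "" to "aPublic" so
# it sorts first alphabetically, matching R's default reference level.
joined["funding"] = joined["funding"].fillna("Private").replace({"": "aPublic"})
print(joined)
```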

Here's a subset of the output. Adding the funding source is an improvement over the model that doesn't include it, and the odds ratio estimates for funding source are highest for directly funded charter schools, followed by locally funded charter schools and private schools. Remember, public schools are the reference level, so for the public level $$\log(\text{odds ratio}) = 0$$. Everything else being equal, our odds ratio estimates based on funding source would be: \begin{align*}
\mathrm{OR}_{\text{public}} &= e^{-3.820}\times e^{0} &= 0.022\\
\mathrm{OR}_{\text{private}} &= e^{-3.820}\times e^{0.752} &= 0.047\\
\mathrm{OR}_{\text{charter-local}} &= e^{-3.820}\times e^{1.049} &= 0.063\\
\mathrm{OR}_{\text{charter-direct}} &= e^{-3.820}\times e^{1.348} &= 0.085
\end{align*}
Converting to estimated PBE rates, we get: \begin{align*}
\mathrm{PBE}_{\text{public}} &= \frac{0.022}{1+0.022} &= 0.022\\
\mathrm{PBE}_{\text{private}} &= \frac{0.047}{1+0.047} &= 0.045\\
\mathrm{PBE}_{\text{charter-local}} &= \frac{0.063}{1+0.063} &= 0.059\\
\mathrm{PBE}_{\text{charter-direct}} &= \frac{0.085}{1+0.085} &= 0.078
\end{align*}
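These conversions are easy to check numerically; a quick sketch using the coefficient estimates quoted above:

```python
import math

intercept = -3.820   # log-odds for the reference level (public schools)
funding_effects = {
    "public": 0.0,
    "private": 0.752,
    "charter-local": 1.049,
    "charter-direct": 1.348,
}

for level, effect in funding_effects.items():
    odds = math.exp(intercept + effect)   # odds ratios multiply: e^a * e^b
    rate = odds / (1 + odds)              # convert odds back to a rate
    print(f"{level:>14}: odds = {odds:.3f}, PBE rate = {rate:.3f}")
```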

## Saturday, February 7, 2015

### Mere Measles: A Look at Vaccination Rates in California, Part I

California is currently at the epicenter of an outbreak of measles, a disease that was considered all but eradicated in the US as of a few years ago. Measles is a nasty disease; it's easily transmitted and at its worst can cause measles encephalitis, leading to brain damage or even death.

The increasing problem in the US with measles, whooping cough and other nearly-eradicated diseases stems from liberal personal belief exemption policies in California and other states. This wasn't a major problem until Andrew Wakefield famously and fraudulently tied autism to the MMR vaccine. His fraud has led to thousands of unnecessary deaths as well as needless misery for thousands more. I myself caught whooping cough in San Diego a few years ago as a consequence. I've had several MMR vaccines over my life, but adults may still only be partially immune; this is yet another reason why a healthy level of herd immunity is so critical to maintain.

PBE rates in California may be relatively low, but the problem is that parents who are likely to seek a PBE exemption for the MMR vaccine tend to cluster, making them susceptible to outbreaks of highly infectious diseases such as the measles. As we'll see, they cluster in private schools, they cluster in particular cities and they cluster in particular counties.

California makes vaccination and PBE (personal belief exemption) information available here (indexed by school code):

Immunization Levels in Child Care and Schools

California also makes student poverty information available here (also indexed by school code):

Student Poverty - FRPM Data

This is typical government data - Excel spreadsheets, bits of non-data all over the place. I've cleaned up both data sets for 2013-2014 here:

California kindergarten and poverty data for 2013-2014

Let's start with a basic nested model using only the vaccination data to examine the PBE rate for public vs private schools:

We get these nice ANOVA results, which indicate that the public vs private status of schools does indeed add to our PBE model.

What's the estimated impact of public vs private? This is a nice illustration of why the odds ratio is so easy to use in logistic models with logit links. Here are the fixed effect estimates:

To get PBE rate estimates for average public and private schools, take the exponential of each estimate to get its odds ratio. \begin{align*}
\mathrm{Intercept} &= e^{-3.113} = 0.044\\
\mathrm{public} &= e^{-0.606} = 0.546\\
\mathrm{private} &= e^{0} = 1.00
\end{align*}
The nice thing about odds ratios is that they simply multiply. What's our estimate for the odds ratio for the PBE rate of a public school? It's $$0.044\times 0.546 = 0.024$$; for a private school it's $$0.044\times 1.00 = 0.044$$. Translating to PBE rates we get $$\frac{0.024}{1+0.024} = 0.023$$ for public schools and $$\frac{0.044}{1+0.044} = 0.042$$ for private schools.

Of course, the odds ratios for counties vary quite a bit, as do the odds ratios for cities within each county. You can see the $$\log({\mathrm{odds}})$$ values for all of California's counties here and cities here.

For another example, let's do a private school in Beverly Hills, which is in Los Angeles County. The odds ratio for Beverly Hills is $$3.875$$; for Los Angeles County it's $$0.625$$. Chaining as before, the overall estimated PBE odds ratio for a private school in Beverly Hills, Los Angeles County is $$0.044\times 1.00\times 0.625\times 3.875 = 0.107$$. Translated to an estimated PBE rate this is $$\frac{0.107}{1+0.107} = 0.097$$. Thus, we'd expect about 10% of the children in a private Beverly Hills kindergarten to be unvaccinated due to personal belief exemptions, leaving coverage well below the roughly 95% vaccination level needed for measles herd immunity.
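The chaining here is just multiplication of odds ratios followed by an odds-to-probability conversion; a quick numeric check of the figures quoted above:

```python
import math

# Odds ratios quoted in the post; they simply multiply when chained.
base_private = math.exp(-3.113)   # intercept: average private school
or_public    = math.exp(-0.606)
or_la_county = 0.625
or_beverly   = 3.875

def to_rate(odds):
    # Convert an odds ratio back to a probability (the PBE rate).
    return odds / (1 + odds)

print(to_rate(base_private))                              # average private school
print(to_rate(base_private * or_public))                  # average public school
print(to_rate(base_private * or_la_county * or_beverly))  # Beverly Hills private
```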

In the next part I'll show how to merge the information in California's vaccination data and poverty data to examine the role of other factors in the clustering of unvaccinated children. As an example, charter schools in California tend to have even higher rates of personal belief exemptions than private schools.

### Touring Waldo; Overfitting Waldo; Scanning Waldo; Waldo, Waldo, Waldo

Randal Olson has written a nice article on finding Waldo - Here’s Waldo: Computing the optimal search strategy for finding Waldo. Randal presents a variety of machine learning methods to find very good search paths among the 68 known locations of Waldo. Of course, there's no need for an approximation; modern algorithms can optimize tiny problems like these exactly.

One approach would be to treat this as a traveling salesman problem with Euclidean distances as edge weights, but you'll need to add a dummy node that has edge weight 0 to every node. Once you have the optimal tour, delete the dummy node and you have your optimal Hamiltonian path.

I haven't coded in the dummy node yet, but here's the Waldo problem as a traveling salesman problem using TSPLIB format.
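The dummy node trick is easy to verify by brute force on a toy instance (the points below are made up, not Waldo's actual coordinates):

```python
import itertools
import math

# A tiny set of points standing in for Waldo locations (hypothetical).
points = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 2)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_path(pts):
    # Optimal Hamiltonian path by brute force: try every ordering.
    def path_len(p):
        return sum(dist(pts[p[i]], pts[p[i + 1]]) for i in range(len(p) - 1))
    return min(path_len(p) for p in itertools.permutations(range(len(pts))))

def best_tour_with_dummy(pts):
    # Same answer via the TSP reduction: add a dummy node at distance 0 to
    # every point, find the optimal tour, and the dummy's two zero-cost
    # edges effectively cut the tour into a Hamiltonian path.
    n = len(pts)  # index n is the dummy node
    def d(i, j):
        return 0.0 if n in (i, j) else dist(pts[i], pts[j])
    return min(sum(d(p[i], p[(i + 1) % (n + 1)]) for i in range(n + 1))
               for p in itertools.permutations(range(n + 1)))

assert math.isclose(best_path(points), best_tour_with_dummy(points))
print(best_path(points))
```

For the real 68-location instance brute force is hopeless, but Concorde solves the reduced TSP exactly, which is the point of the reduction.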

The Concorde software package optimizes this in a fraction of a second:

I'll be updating this article to graphically show the results for the optimal Hamiltonian path. There are also many additional questions I'll address. Do we really want to use this as our search path? We're obviously overfitting. Do we want to assume Waldo will never appear in a place he hasn't appeared before? When searching for Waldo we see an entire little area, not a point, so a realistic approach would be to develop a scanning algorithm that covers the entire image and accounts for our viewing area and the posterior Waldo density. Our eyes can jump quickly from point to point while not actively searching, but the scans themselves are much slower.

## Friday, February 6, 2015

### Short Notes: Get CUDA and gputools Running on Ubuntu 14.10

Here's a basic guide for getting CUDA 7.0 and the R package gputools running perfectly under Ubuntu 14.10. It's not difficult, but there are a few issues and this will be helpful to have in a single place.

If you're running Ubuntu 14.10, I'd recommend installing CUDA 7.0. NVIDIA has a 7.0 Debian package specifically for 14.10; this wasn't the case for CUDA 6.5, which only had a Debian package for 14.04.

To get access to CUDA 7.0, you'll first need to register as a CUDA developer.

Join The CUDA Registered Developer Program

Once you have access, navigate to the CUDA 7.0 download page and get the Debian package.

You'll need to be running either the NVIDIA 340 or 346 drivers. If you're having trouble upgrading, I'd suggest adding the xorg-edgers PPA.

Once your NVIDIA driver is set, install the CUDA 7.0 Debian package you've downloaded. Don't forget to remove any previously installed CUDA packages or repositories.

You'll need to add paths so everything knows where CUDA is installed. Append the following to the .bashrc in your home directory:
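The snippet itself didn't survive in this copy of the post; for a default CUDA 7.0 install (which lands in /usr/local/cuda-7.0) the standard additions would be:

```shell
# Make the CUDA 7.0 toolchain and libraries visible (default install paths).
export PATH=/usr/local/cuda-7.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH
```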

Execute "source ~/.bashrc" for these changes to be applied. If you want to test your new CUDA install, make the samples provided by NVIDIA.

I get the following output when running BlackScholes:

The next task is to install gputools for R. Unfortunately, you can't install the current package through R, as the source code contains references to CUDA architectures that are obsolete under CUDA 7.0. But that's easy to fix.

Now do some editing in gputools/src/Makefile:

Now build and install the patched gputools package while you're in the directory immediately above gputools:
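The exact commands weren't preserved in this copy, but the standard R package build-and-install sequence from that directory would be:

```shell
# Build a source tarball from the patched directory, then install it.
R CMD build gputools
R CMD INSTALL gputools_*.tar.gz
```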

If you want to make the gputools package available for all R users:

Keep in mind that they'll have to make the same environment variable changes as above. Let's test it!

Running gives us:

A nice 26-fold speedup. We're all set!

## Wednesday, January 21, 2015

### Introduction

Data science is a very hot, perhaps the hottest, field right now. Sports analytics has been my primary area of interest, and it's a field that has seen amazing growth in the last decade. It's no surprise that the most common question I'm asked is about becoming a data scientist. This will be a first set of rough notes attempting to answer this question from my own personal perspective. Keep in mind that this is only my opinion and there are many different ways to do data science and become a data scientist.

Data science is using data to answer a question. This could be something as simple as making a plot by hand, or using Excel to take the average of a set of numbers. The important parts of the process are knowing which questions to ask, deciding what information you'd need to answer them, picking a method that takes this data and produces results relevant to your question and, most importantly, knowing how to properly interpret these results so you can be confident that they actually answer your question.

Knowing the questions requires some domain expertise, either yours or someone else's. Unless you're a data science researcher, data science is a tool you apply to another domain.

If you have the data you feel should answer your question, you're in luck. Frequently you'll have to go out and collect the data yourself, e.g. by scraping the web. Even if you already have the data, it's common to have to process it to remove bad records, correct errors and put it into a form better suited for analysis. A popular tool for this phase of data analysis is a scripting language, typically something like Python, Perl or Ruby. These are high-level programming languages that are very good at web work as well as manipulating data.
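As a toy illustration of that cleaning step (the data here is made up): stray whitespace, an inconsistent missing-value marker and an incomplete record all need handling before analysis.

```python
import csv
import io

# A hypothetical scrap of messy scraped data.
raw = """name,avg
 Smith ,0.312
Jones,N/A
,0.250
Lee,0.287
"""

rows = []
for row in csv.DictReader(io.StringIO(raw)):
    name = row["name"].strip()
    avg = row["avg"].strip()
    if not name or avg in ("", "N/A"):   # drop incomplete records
        continue
    rows.append((name, float(avg)))      # normalize types

print(rows)   # [('Smith', 0.312), ('Lee', 0.287)]
```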

If you're dealing with a large amount of data, you'll find that it's convenient to store it in a structured way that makes it easier to access, manipulate and update in the future. This will typically be a relational database of some type, such as PostgreSQL, MySQL or SQL Server. These all use the programming language SQL.
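A minimal sketch of that idea, using Python's built-in SQLite driver so it runs anywhere (PostgreSQL and MySQL work the same way, just with a different driver; the table and values are hypothetical):

```python
import sqlite3

# Store cleaned records in a relational table and query them with SQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE batting (name TEXT, avg REAL)")
con.executemany("INSERT INTO batting VALUES (?, ?)",
                [("Smith", 0.312), ("Jones", 0.250), ("Lee", 0.287)])

best = con.execute(
    "SELECT name, avg FROM batting WHERE avg > 0.280 ORDER BY avg DESC"
).fetchall()
print(best)   # [('Smith', 0.312), ('Lee', 0.287)]
```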

Methodology and interpretation are the most difficult, broadest and most important parts of data science. You'll see methodology referenced as statistical learning, machine learning, artificial intelligence and data mining; these can be covered in statistics, computer science, engineering or other classes. Interpretation is traditionally the domain of statistics, but this is always taught together with methodology.

You can start learning much of this material freely and easily with MOOCs. Here's an initial list.

### MOOCs

#### Data Science Basics

Johns Hopkins: The Data Scientist’s Toolbox. Overview of version control, markdown, git, GitHub, R, and RStudio. Started January 5, 2015. Coursera.

Johns Hopkins: R Programming. R-based. Started January 5, 2015. Coursera.

#### Scripting Languages

Intro to Computer Science. Python-based. Take anytime. Udacity; videos and exercises are free.

Programming Foundations with Python. Python-based. Take anytime. Udacity; videos and exercises are free.

MIT: Introduction to Computer Science and Programming Using Python. Python-based. Class started January 9, 2015. edX.

#### Databases and SQL

Stanford: Introduction to Databases. XML, JSON, SQL; uses SQLite for SQL. Self-paced. Coursera.

#### Machine Learning

Stanford: Machine Learning. Octave-based. Class started January 19, 2015. Coursera.

Stanford: Statistical Learning. R-based. Class started January 19, 2015. Stanford OpenEdX.