
Baseball's Billion Dollar Equation

In 1999 Voros McCracken infamously speculated about how much control a pitcher really has over balls put in play. Not much, as it turned out, and DIPS (defense-independent pitching statistics) was born. It's tough to put a value on something like DIPS, but if an MLB team had developed and exploited it for several years, it could potentially have been worth hundreds of millions of dollars. Likewise, catcher framing could easily have been worth hundreds of millions.

How about a billion dollar equation? Sure: look at the baseball draft. An 8th round draft pick like Paul Goldschmidt could net you a $200M surplus. And then there's Chase Headley, Matt Carpenter, Brandon Belt, Jason Kipnis and Matt Adams. The commonality? All were college position players easily identified as likely major leaguers purely through statistical analysis. You can do the same kind of statistical analysis for college pitchers, of course, but ideally you'd also want velocities. These are frequently available through public sources, though you may have to compile them manually. We'll also find that GB/FB (ground ball to fly ball) ratios are important.
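To make the surplus figure concrete, the back-of-the-envelope arithmetic is just production value minus salary; the WAR and salary numbers below are illustrative rather than Goldschmidt's actual figures:

\[ \text{surplus} \approx \text{WAR} \times (\text{market \$ per win}) - \text{salary paid}. \]

Roughly 30 wins above replacement at $7M per win, against something like $10M in salary over the cheap team-control years, is on the order of $200M.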

There's plenty of public data available. I've made yearly NCAA college baseball data available in my public baseball GitHub account; it covers 2002-2014, which is plenty of data for analysis. Older years are also available, but only in PDF format. So you'll either have to enter the data manually, use a service or do some high-quality automated OCR. My repository also includes NCAA play-by-play data from several sources, which among other things is useful for building catcher framing and defensive estimates.
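To get going with the yearly files, here's a minimal sketch of stacking per-season batting CSVs into one table with pandas. The file names (ncaa_batting_2002.csv, etc.) and columns are assumptions for illustration; adjust them to whatever layout the repository actually uses.

import glob
import pandas as pd

# Assumed layout: one CSV of player batting lines per season,
# e.g. ncaa_batting_2002.csv ... ncaa_batting_2014.csv
frames = []
for path in sorted(glob.glob("ncaa_batting_*.csv")):
    season = int(path.split("_")[-1].split(".")[0])  # season pulled from the file name
    df = pd.read_csv(path)
    df["season"] = season
    frames.append(df)

batting = pd.concat(frames, ignore_index=True)
print(batting.groupby("season").size())  # quick sanity check: rows per season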

Also publicly available, and coming to my GitHub over the next several days:
  1. NAIA - roughly NCAA D2 level
  2. NJCAA - junior college, same rules as NCAA
  3. CCCAA - junior college
  4. NWAACC - junior college
Prospects come out of the NAIA and NCAA D2/D3 divisions every year, and with the free agent market valuing a single win at around $7M, you want to make sure you don't overlook any player with talent. With JUCO players you'd like to identify that sleeper before he transfers to an NCAA D1 program and has a huge year. Later you'll also want to analyze:
  1. Summer leagues
  2. Independent leagues
We'll start by looking at what data is available and how to combine the data sets. There are always player transfers to identify, and NCAA teams frequently play interdivision games as well as games against NAIA teams. We'll want to build a predictive model that identifies the most talented players uniformly across all leagues, so this will be a boring but necessary step.
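As a first pass at stitching the league files together, here's a sketch of a name-normalization key for lining up the same player across sources (transfers, interdivision games, summer leagues). This is only one assumed way to do the matching; in practice you'll also want school and season to break ties, but it illustrates the kind of plumbing this step involves.

import re
import unicodedata

def normalize_name(name):
    """Build a crude matching key so the same player lines up across league data sets."""
    # Strip accents, lowercase, and convert "Last, First" style to "first last".
    name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode().lower()
    if "," in name:
        last, first = [part.strip() for part in name.split(",", 1)]
        name = f"{first} {last}"
    name = re.sub(r"[^a-z ]", "", name)  # drop punctuation and digits
    suffixes = {"jr", "sr", "ii", "iii", "iv"}
    return " ".join(t for t in name.split() if t not in suffixes)

# Example: the same player listed two different ways in two sources.
print(normalize_name("Goldschmidt, Paul") == normalize_name("Paul Goldschmidt"))  # True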
