tag:blogger.com,1999:blog-9149402429183581490Wed, 29 Jul 2015 11:55:52 +0000The Angry Statisticianhttp://angrystatistician.blogspot.com/noreply@blogger.com (Christopher Long)Blogger58125tag:blogger.com,1999:blog-9149402429183581490.post-3689344049108158170Sun, 12 Jul 2015 02:54:00 +00002015-07-11T20:08:13.781-07:00Power Rankings: Looking at a Very Simple MethodOne of the simplest and most common power ranking models is known as the <a href="https://en.wikipedia.org/wiki/Bradley%E2%80%93Terry_model" target="_blank">Bradley-Terry-Luce model</a>, which is <a href="http://angrystatistician.blogspot.com/2013/03/baseball-chess-psychology-and.html" target="_blank">equivalent to other famous models</a> such as the <a href="https://en.wikipedia.org/wiki/Logistic_regression" target="_blank">logistic model</a> and the <a href="https://en.wikipedia.org/wiki/Elo_rating_system" target="_blank">Elo rating system</a>. I'll be referring to "teams" here, but of course the same ideas apply to any two-participant game.<br /><br />Let me clarify what I mean when I use the term "power ranking". A power ranking supplies not only a ranking of teams but also numbers that may be used to estimate the probabilities of various outcomes were two particular teams to play a match.<br /><br />In the BTL power ranking system we assume the teams have some latent (hidden/unknown) "strength" \(R_i\), and that the probability of \(i\) beating \(j\) is \( \frac{R_i}{R_i+R_j} \). Note that each \(R_i\) is assumed to be strictly positive. 
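As a concrete sketch, here's the model in a few lines of Python, together with the classic fixed-point fitting iteration whose justification (the observed-wins-equal-expected-wins property) is derived below. The match results here are made up for illustration; this is not the NCAA hockey code linked later in the post.

```python
from collections import defaultdict

def btl_win_prob(R_i, R_j):
    """BTL probability that a team with strength R_i beats one with strength R_j."""
    return R_i / (R_i + R_j)

def fit_btl(matches, iters=500):
    """Fit BTL strengths from (winner, loser) pairs via the classic fixed-point
    update R_i <- W_i / sum over i's matches of 1/(R_i + R_opponent).
    At convergence, each team's observed wins equal its expected wins."""
    teams = {t for match in matches for t in match}
    R = dict.fromkeys(teams, 1.0)
    wins = defaultdict(int)
    for w, _ in matches:
        wins[w] += 1
    for _ in range(iters):
        denom = defaultdict(float)
        for w, l in matches:
            denom[w] += 1.0 / (R[w] + R[l])
            denom[l] += 1.0 / (R[w] + R[l])
        R = {t: wins[t] / denom[t] for t in teams}
        ref = max(R.values())  # pin down the scale: ratings are only defined up to a positive multiple
        R = {t: v / ref for t, v in R.items()}
    return R

# Equal strengths give a coin flip; a 3:1 strength ratio gives a 75% favorite.
print(btl_win_prob(1.0, 1.0), btl_win_prob(3.0, 1.0))  # 0.5 0.75
```

This sketch assumes every team has at least one win and one loss, so the maximum-likelihood fit exists and the iteration is well behaved.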
Where does this model structure come from?<br /><br />Here are three reasonable constraints for a power ranking model:<br /><ol><li> If teams \(i\) and \(j\) have equal strength, the probability of one beating the other should be \( \frac{1}{2}\).</li><li>As the strength of one team strictly decreases to 0 (infinitely weak) with the other team fixed, the probability of the other team winning strictly increases to 1.</li><li>As the strength of one team strictly increases to infinity (infinitely strong) with the other team fixed, the probability of the other team winning strictly decreases to 0.</li></ol><div>Note that our model structure satisfies all three constraints. Can you think of other simple model structures that satisfy all three constraints?</div><div><br /></div><div>Given this model and a set of teams and match results, how can we estimate the \(R_i\)? The <a href="https://en.wikipedia.org/wiki/Maximum_likelihood" target="_blank">maximum-likelihood estimators</a> are the set of \( R_i \) that maximizes the probability of the observed outcomes actually happening. For any given match, the probability of team \( i \) beating team \( j \) is \( \frac{R_i}{R_i+R_j} \), so the overall probability of the observed outcomes of the matches \( M \) occurring is \[ \mathcal{L} = \prod_{m\in M} \frac{R_{w(m)}}{R_{w(m)}+R_{l(m)}},\] where \( w(m) \) is the winner and \( l(m) \) the loser of match \( m \). We can transform this into a sum by taking logarithms: \[ \log\left( \mathcal{L} \right) = \sum_{m\in M} \left[ \log\left(R_{w(m)}\right) - \log\left(R_{w(m)}+R_{l(m)}\right) \right].\] Before going further, let's make a useful reparameterization by setting \( e^{r_i} = R_i \); this makes sense as we're requiring the \( R_i \) to be strictly positive. 
We then get \[ \log\left( \mathcal{L} \right) = \sum_{m\in M} \left[ r_{w(m)} - \log\left(e^{r_{w(m)}}+e^{r_{l(m)}}\right) \right].\] Taking partial derivatives and setting them to zero, we get \begin{eqnarray*}<br />\frac{\partial \log\left( \mathcal{L} \right)}{\partial r_i} &=& \sum_{w(m)=i} 1 - \frac{e^{r_{w(m)}}}{e^{r_{w(m)}}+e^{r_{l(m)}}} + \sum_{l(m)=i} - \frac{e^{r_{l(m)}}}{e^{r_{w(m)}}+e^{r_{l(m)}}}\\<br />&=& \sum_{w(m)=i} 1 - \frac{e^{r_i}}{e^{r_i}+e^{r_{l(m)}}} + \sum_{l(m)=i} - \frac{e^{r_i}}{e^{r_{w(m)}}+e^{r_i}}\\<br />&=&0.<br />\end{eqnarray*} But this is just the number of actual wins minus the expected wins! Thus, the maximum likelihood estimators for the \( r_i \) satisfy \( O_i = E_i \) for all teams \( i \), where \( O_i \) is the actual (observed) number of wins for team \( i \), and \( E_i \) is the expected number of wins for team \( i \) based on our model. That's a nice property!<br /><br />If you'd like to experiment with some actual data, and to see that the resulting fit does indeed satisfy this property, here's an <a href="https://github.com/octonion/hockey/tree/master/lunchtime" target="_blank">example BTL model using NCAA men's ice hockey scores</a>. You can, of course, actually use this property to iteratively solve for the maximum-likelihood estimates \( R_i \). Note that you'll have to fix one of the \( R_i \) to be a particular value (or add some other constraint), as the model probabilities are invariant with respect to multiplication of the \( R_i \) by the same positive scalar.</div>http://angrystatistician.blogspot.com/2015/07/power-rankings-looking-at-very-simple.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-8665826341097287811Sat, 11 Jul 2015 22:00:00 +00002015-07-11T15:00:38.036-07:00Getting Started Doing Baseball Analysis without CodingThere's a lot of confusion about how best to get started doing baseball analysis. 
<a href="http://www.dataphoric.com/2015/06/27/learn_data_science_the_hard_way/" target="_blank">It doesn't have to be difficult!</a> You can start doing it right away, even if you don't know anything about R, Python, Ruby, SQL or machine learning (most GMs can't code). Learning these and other tools makes it easier and faster to do analysis, but they're only part of the process of constructing a well-reasoned argument. They're important, of course, because they can turn 2 months of hard work into 10 minutes of typing. Even if you don't like mathematics, statistics, coding or databases, they're mundane necessities that can make your life much easier and your analysis more powerful.<br /><br />Here are two example problems. You don't have to do these specifically, but they illustrate the general idea. Write up your solutions, then publish them so other people can make some (hopefully) helpful comments and suggestions. This can be on a blog or through a version control platform like <a href="https://github.com/" target="_blank">GitHub</a> (which is also great for versioning any code or data you use). Try to write well! A great argument that's poorly written and poorly presented isn't going to be very convincing. Once it's finished, review and revise, review and revise, review and revise. When a team you follow makes a move, treat it as a puzzle for you to solve. Why did they do it, and was it a good idea?<br /><ol><li>Pick a recent baseball trade. For example, the Padres traded catcher Yasmani Grandal for Dodgers outfielder Matt Kemp. It's never that simple of course; the Padres aren't paying all of Matt Kemp's salary. Find out what the salary obligations were for each club in this trade. Using your favorite public projection system, what were the projected surplus values for each player at the time of the trade? 
Of course, there were <a href="http://www.si.com/mlb/2014/12/18/matt-kemp-arthritic-hips-dodgers-padres-trade" target="_blank">other players</a> involved in that trade, too. What were the expected surplus values of those players? From the perspective of surplus values, who won or lost this trade? Finally, why do you think each team made this trade, especially considering that they were division rivals? Do you think one or both teams made any mistakes in reasoning? If so, what were they, and did the other team take advantage of those mistakes?</li><li>Pick any MLB team and review the <a href="http://mlb.mlb.com/mlb/events/draft/y2015/drafttracker.jsp" target="_blank">draft picks they made in the 2015 draft</a> for the first 10 rounds. Do you notice any trends or <a href="http://mlb.mlb.com/mlb/events/draft/y2014/drafttracker.jsp" target="_blank">changes from the 2014 draft</a>? Do these picks agree or disagree with the various public pre-draft player rankings? Which picks were designed to save money to help sign other picks? Identify those tough signs. Was the team actually able to sign them, and were the money-saving picks still reasonably good picks? Do your best to identify which picks you thought were good and bad, write them down in a notebook with your reasoning, then check back in 6 months and a year. Was your reasoning correct? 
If not, what were your mistakes and how can you avoid making them in the future?</li></ol>http://angrystatistician.blogspot.com/2015/07/getting-started-doing-baseball-analysis.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-8962656961286759513Mon, 09 Mar 2015 00:27:00 +00002015-03-08T17:27:27.572-07:00Some Potentially Useful SQL Resources<br />Some potentially useful SQL resources - explanations, visualizations, exercises, games, classes.<br /><ol><li><a href="http://blog.codinghorror.com/a-visual-explanation-of-sql-joins/" target="_blank">A Visual Explanation of SQL Joins</a></li><li><a href="http://datamonkey.pro/" target="_blank">Datamonkey</a></li><li><a href="https://my.vanderbilt.edu/cs265/" target="_blank">Introduction to Database Management Systems</a></li><li><a href="http://wwwlgis.informatik.uni-kl.de/cms/courses/informationssysteme/sqlisland/" target="_blank">SQL Island Adventure Game</a></li><li><a href="http://pgexercises.com/" target="_blank">PostgreSQL Exercises</a></li><li><a href="http://www.padjo.org/" target="_blank">Public Affairs Data Journalism</a></li><li><a href="https://github.com/rhc2104/sqlteaching" target="_blank">SQL Teaching's GitHub repo (if you're curious)</a></li><li><a href="https://class.stanford.edu/courses/DB/2014/SelfPaced/about" target="_blank">Stanford's Self-Paced Database MOOC</a></li><li><a href="http://hackr.io/tutorials/sql" target="_blank">Hackr.io's SQL Section (good to check occasionally)</a></li><li><a href="http://www.sql-ex.ru/" target="_blank">Practical skills of SQL language</a></li><li><a href="http://www.sqlteaching.com/" target="_blank">SQL Teaching (learn SQL in your browser)</a></li><li><a href="http://sqlzoo.net/wiki/Main_Page" target="_blank">SQLZOO - Interactive SQL Tutorial</a></li><li><a href="https://schemaverse.com/" target="_blank">The Schemaverse: a space-based strategy game implemented entirely within a PostgreSQL database</a></li><li><a 
href="http://blog.treasuredata.com/blog/2014/12/05/learn-sql-by-calculating-customer-lifetime-value-part-1/" target="_blank">Treasure Data: Learn SQL by Calculating Customer Lifetime Value</a></li></ol>http://angrystatistician.blogspot.com/2015/03/some-potentially-useful-sql-resources.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-7862524207431434181Tue, 03 Mar 2015 08:56:00 +00002015-03-03T00:56:53.621-08:00Who Controls the Pace in Basketball, Offense or Defense?During a recent chat with basketball analyst <a href="https://twitter.com/sethpartnow" target="_blank">Seth Partnow</a>, he mentioned a topic that came up during a discussion at the recent <a href="http://www.sloansportsconference.com/" target="_blank">MIT Sloan Sports Analytics Conference</a>. Who controls the pace of a game more, the offense or defense? And what is the percentage of pace responsibility for each side? The analysts came up with a rough consensus opinion, but is there a way to answer this question analytically? I came up with one approach that examines the variations in possession times, but it suddenly occurred to me that this question could also be answered immediately by looking at the offense-defense asymmetry of the home court advantage.<div><br /></div><div>As you can see in the <a href="https://github.com/octonion/basketball-m/blob/master/ncaa/diagnostics/lmer.txt" target="_blank">R output of my NCAA team model code</a> in one of my <a href="https://github.com/octonion/basketball-m" target="_blank">public basketball repositories</a>, the offense at home scores points at a rate about \( e^{0.0302} = 1.031 \) times the rate on a neutral court, everything else the same. Likewise, the defense at home allows points at a rate about \( e^{-0.0165} = 0.984\) times the rate on a neutral court; in both cases the neutral court rate is the reference level. Notice the geometric asymmetry; \( 1.031\cdot 0.984 = 1.015 > 1\). 
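A quick numeric check of how this home-court asymmetry splits pace responsibility, using the rounded multipliers above (a sketch of the apportionment only, not the fitted model itself):

```python
off_home = 1.031  # e^{0.0302}: home offense scoring rate vs. neutral court
def_home = 0.984  # e^{-0.0165}: home defense points-allowed rate vs. neutral court

# Attribute pace control in proportion to each side's deviation from the
# neutral-court rate of 1.
off_share = (off_home - 1) / ((off_home - 1) + (1 - def_home))
print(round(off_share, 2))  # 0.66
```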
The implication is that the offense is responsible for about the fraction \[ \frac{(1.031-1)}{(1.031-1)+(1-0.984)} \approx 0.66 \] of the scoring pace. That is, the offense controls about 2/3 of the pace and the defense about 1/3. The consensus opinion the analysts came up with at Sloan? It was 2/3 offense, 1/3 defense! It's nice when things work out, isn't it?</div><div><br /></div><div>I've used NCAA basketball because there are plenty of neutral court games; to examine the NBA directly we'll have to use a more sophisticated (but perhaps <a href="http://en.wikipedia.org/wiki/Mathematical_beauty" target="_blank">less beautiful</a>) approach involving the variation in possession times. I'll do that next, and I'll also show you how to apply this new information to make better <a href="https://github.com/octonion/basketball-m/blob/master/ncaa/sos/predict_daily.txt" target="_blank">game predictions</a>. Finally, there's a nice connection to some recent work on <a href="http://qz.com/316826/mathematicians-have-finally-figured-out-how-to-tell-correlation-from-causation/" target="_blank">inferring causality</a> that I'll outline.</div>http://angrystatistician.blogspot.com/2015/03/who-controls-pace-in-basketball-offense.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-8170448064117049143Wed, 11 Feb 2015 22:17:00 +00002015-02-11T14:17:53.532-08:00Baseball's Billion Dollar EquationIn 1999 <a href="http://en.wikipedia.org/wiki/Voros_McCracken" target="_blank">Voros McCracken</a> famously speculated about the amount of control the pitcher had over balls put in play. Not so much, as it turned out, and <a href="http://en.wikipedia.org/wiki/Defense_independent_pitching_statistics" target="_blank">DIPS</a> was born. It's tough to put a value on something like DIPS, but if an MLB team had developed and exploited it for several years, it could potentially have been worth hundreds of millions of dollars. 
Likewise, <a href="http://www.hardballtimes.com/the-little-technician-rene-rivera-craftsman-at-work/" target="_blank">catcher framing</a> could easily have been worth hundreds of millions.<br /><br />How about a billion dollar equation? Sure, look at the baseball draft. An 8th round draft pick like <a href="http://www.fangraphs.com/statss.aspx?playerid=9218" target="_blank">Paul Goldschmidt</a> could net you a $200M surplus. And then there's <a href="http://www.fangraphs.com/statss.aspx?playerid=4720" target="_blank">Chase Headley</a>, <a href="http://www.fangraphs.com/statss.aspx?playerid=8090" target="_blank">Matt Carpenter</a>, <a href="http://www.fangraphs.com/statss.aspx?playerid=10264" target="_blank">Brandon Belt</a>, <a href="http://www.fangraphs.com/statss.aspx?playerid=9776" target="_blank">Jason Kipnis</a> and <a href="http://www.fangraphs.com/statss.aspx?playerid=9393" target="_blank">Matt Adams</a>. The commonality? All college position players easily identified as likely major leaguers purely through statistical analysis. You can also do statistical analysis for college pitchers, of course, but ideally you'd also want velocities. These are frequently available through public sources, but you may have to put them together manually. We'll also find that GB/FB ratios are important.<br /><br />There's plenty of public data available. I've made yearly NCAA college baseball data available in my <a href="https://github.com/octonion/baseball-public" target="_blank">public baseball GitHub</a> account; it covers 2002-2014, which is plenty of data for analysis. Older years are also available, but only in PDF format. So you'll either have to enter the data manually, use a service or do some high-quality automated OCR. 
My repository also includes NCAA play-by-play data from several sources, which among other things is useful for building catcher framing and defensive estimates.<br /><br />Also publicly available, and coming to my GitHub over the next several days:<br /><ol><li><a href="http://en.wikipedia.org/wiki/National_Association_of_Intercollegiate_Athletics" target="_blank">NAIA</a> - roughly NCAA D2 level</li><li><a href="http://en.wikipedia.org/wiki/National_Junior_College_Athletic_Association" target="_blank">NJCAA</a> - junior college, same rules as NCAA</li><li><a href="http://en.wikipedia.org/wiki/California_Community_College_Athletic_Association" target="_blank">CCCAA</a> - junior college</li><li><a href="http://en.wikipedia.org/wiki/Northwest_Athletic_Conference_(junior_college)" target="_blank">NWAACC</a> - junior college</li></ol><div>Prospects come out of the NAIA and NCAA D2/D3 divisions every year, and with the free agent market valuing a single win at around $7M, you want to make sure you don't overlook any player with talent. With JUCO players you'd like to identify that sleeper before he transfers to an NCAA D1 and has a huge year. Later you'll also want to analyze:</div><div><ol><li>Summer leagues</li><li>Independent leagues</li></ol><div>We'll start by looking at what data is available and how to combine the data sets. There are always player transfers to identify, and NCAA teams frequently play interdivision games as well as games against NAIA teams. 
We'll want to build a predictive model that identifies the most talented players uniformly across all leagues, so this will be a boring but necessary step.</div></div>http://angrystatistician.blogspot.com/2015/02/baseballs-billion-dollar-equation.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-4885721382300786171Wed, 11 Feb 2015 16:48:00 +00002015-02-11T08:48:34.913-08:00A Very Rough Guide to Getting Started in Data Science: Part II, The Big PictureData science to a beginner seems completely overwhelming. Not only are there huge numbers of programming languages, packages and algorithms, but even managing your data is an entire area itself. Some examples are the languages R, Python, Ruby, Perl, Julia, Mathematica, MATLAB/Octave; packages SAS, STATA, SPSS; algorithms linear regression, logistic regression, nested model, neural nets, support vector machines, linear discriminant analysis and deep learning.<br />For managing your data some people use Excel, or a relational database like MySQL or PostgreSQL. And where do things like big data, NoSQL and Hadoop fit in? And what's gradient descent and why is it important? But perhaps the most difficult part of all is that you actually need to know and understand statistics, too.<br /><br />It does seem overwhelming, but there's a simple key idea - <b>data science is using data to answer a question</b>. Even if you're only sketching a graph using a stick and a sandbox, you're still doing data science. Your goal for data science should be to continually learn better, more powerful and more efficient ways to answer your questions. My general framework has been strongly influenced by <a href="http://en.wikipedia.org/wiki/George_P%C3%B3lya" target="_blank">George Pólya</a>'s wonderful book <a href="http://en.wikipedia.org/wiki/How_to_Solve_It" target="_blank">"How to Solve It"</a>. 
While it's directed at solving mathematical problems, his approach is helpful for solving problems in general.<br /><br />"How to Solve It" suggests the following steps when solving a mathematical problem:<br /><ol><li>First, you have to understand the problem.</li><li>After understanding, then make a plan.</li><li>Carry out the plan.</li><li>Review/extend. Look back on your work. How could it be better?</li></ol><div>Pólya goes into much greater detail for each step and provides some illustrative examples. It's not the final word on how to approach and solve mathematical problems, but it's very helpful and I highly recommend it. For data science, the analogous steps from my perspective would be:</div><ol><li>What questions do you want to answer?</li><li>What data would be helpful to answer these questions? How and where do you get this data?</li><li>Given the question you want to answer and the data you have, which approaches and models are likely to be useful? This can be <b>very confusing</b>. There are always tradeoffs - underfitting vs overfitting, <a href="http://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff" target="_blank">bias vs variance</a>, simplicity vs complexity, <a href="http://en.wikipedia.org/wiki/Prior_probability" target="_blank">information about where something came from vs what's it doing</a>. </li><li>Perform analysis/fit model.</li><li>How do you know if your model and analysis are good or bad, and how confident should you be in your predictions and conclusions? 
This is a very critical step, but one that's commonly treated lightly or even skipped entirely.</li><li>Given the results, what should you try next?</li></ol>Let's follow Pólya and do an illustrative example next.http://angrystatistician.blogspot.com/2015/02/a-very-rough-guide-to-getting-started.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-6914729618120541850Tue, 10 Feb 2015 23:18:00 +00002015-02-10T15:18:03.938-08:00More Measles: Vaccination Rates and School FundingI took a look at California's personal belief exemption (PBE) rate for kindergarten vaccinations in <a href="http://angrystatistician.blogspot.com/2015/02/mere-measles-look-at-vaccination-rates.html">Part I</a>. California also provides poverty information for public schools through the <a href="http://www.cde.ca.gov/ds/sd/sd/filessp.asp">Free or Reduced Price Meals data sets</a>, both of which conveniently include California's school codes. Cleaned versions of these data sets and my R code are in my <a href="https://github.com/octonion/vaccinations">vaccination GitHub</a>.<br /><br />We can use the school code as a key to join these two data sets. But remember, the FRPM data set only includes data about public schools, so we'll have to retain the private school data for PBEs by doing what's called a <a href="http://en.wikipedia.org/wiki/Join_%28SQL%29#Left_outer_join">left outer join</a>. This still performs a join on the school code key, but any school codes in the left data set that don't have corresponding entries in the right data set are still retained. The missing values for the right data set in this case are set to <a href="http://en.wikipedia.org/wiki/Null_%28SQL%29">NULL</a>.<br /><br />We can perform a left outer join in R by using "merge" with the option "all.x=TRUE". I'll start by looking at how the PBE rate varies between charter, non-charter public and private schools, so we'll need to replace those missing values for funding source after our join. 
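The post uses R's "merge" with "all.x=TRUE" for this; the semantics of a left outer join are easy to sketch in plain Python (toy rows with made-up school codes and column names, not the actual California fields):

```python
def left_outer_join(left, right, key):
    """Keep every row of `left`; attach matching `right` fields by `key`,
    filling unmatched right-side columns with None (SQL NULL)."""
    index = {row[key]: row for row in right}
    right_cols = {k for row in right for k in row if k != key}
    joined = []
    for row in left:
        merged = dict(row)
        match = index.get(row[key])
        for col in right_cols:
            merged[col] = match.get(col) if match else None
        joined.append(merged)
    return joined

# PBE data covers public and private schools; FRPM covers public schools only.
pbe = [{"school_code": 1, "pbe_rate": 0.02},
       {"school_code": 2, "pbe_rate": 0.05},
       {"school_code": 3, "pbe_rate": 0.10}]   # school 3 is private
frpm = [{"school_code": 1, "funding": "aPublic"},
        {"school_code": 2, "funding": "charter-direct"}]

joined = left_outer_join(pbe, frpm, "school_code")
# School 3 is retained, with funding = None, later recoded as private.
```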
If the funding source is missing, it's a private school. The FRPM data also denotes non-charter public schools with funding type "", so I'll replace those with "aPublic" for convenience. For factors, R will by default set the reference level to be the level that comes first alphabetically.<br /><br /><script src="https://gist.github.com/octonion/546f5657e89cc72b367d.js"></script><br />Here's a subset of the output. The addition of the funding source is an improvement over the model that doesn't include it, and the estimated odds ratios for funding source are highest for directly funded charter schools, followed by locally funded charter schools and private schools. Remember, public schools are the reference level, so for the public level \(\log(\text{odds ratio}) = 0\). Everything else being equal, our odds ratio estimates based on funding source would be: \begin{align*}<br />\mathrm{OR}_{\text{public}} &= e^{-3.820}\times e^{0} &= 0.022\\<br />\mathrm{OR}_{\text{private}} &= e^{-3.820}\times e^{0.752} &= 0.047\\<br />\mathrm{OR}_{\text{charter-local}} &= e^{-3.820}\times e^{1.049} &= 0.063\\<br />\mathrm{OR}_{\text{charter-direct}} &= e^{-3.820}\times e^{1.348} &= 0.085<br />\end{align*}<br />Converting to estimated PBE rates, we get: \begin{align*}<br />\mathrm{PBE}_{\text{public}} &= \frac{0.022}{1+0.022} &= 0.022\\<br />\mathrm{PBE}_{\text{private}} &= \frac{0.047}{1+0.047} &= 0.045\\<br />\mathrm{PBE}_{\text{charter-local}} &= \frac{0.063}{1+0.063} &= 0.059\\<br />\mathrm{PBE}_{\text{charter-direct}} &= \frac{0.085}{1+0.085} &= 0.078<br />\end{align*}<br /><script src="https://gist.github.com/octonion/52c4fb139ee83d40c519.js"></script>http://angrystatistician.blogspot.com/2015/02/more-measles-vaccination-rates-and.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-555489984408974541Sun, 08 Feb 2015 03:43:00 +00002015-02-08T00:06:40.987-08:00Mere Measles: A Look at Vaccination Rates in California, Part 
ICalifornia is currently at the <a href="http://www.mercurynews.com/science/ci_27479456/california-measles-outbreak-state-cases-now-over-100">epicenter of a measles outbreak</a>, a disease that was considered all but eradicated in the US as of a few years ago. Measles is a nasty disease; it's easily transmitted and at its worst can cause measles encephalitis, leading to brain damage or even death.<br /><br />The increasing problem in the US with the measles, whooping cough and other nearly-eradicated diseases stems from a liberal <a href="http://www.shotsforschool.org/laws/exemptions/">personal belief exemption policy</a> in California and other states. This wasn't a major problem until <a href="http://en.wikipedia.org/wiki/Andrew_Wakefield">Andrew Wakefield</a> <a href="http://en.wikipedia.org/wiki/MMR_vaccine_controversy">famously and fraudulently tied autism to the MMR vaccine</a>. This has led to thousands of unnecessary deaths as well as needless misery for thousands more. I myself caught a case of the <a href="http://www.kpbs.org/news/2014/dec/22/record-number-whooping-cough-cases-san-diego-count/">whooping cough in San Diego</a> a few years ago as a consequence of Wakefield's fraud. I've had several MMR vaccines over my life, but adults may still only be partially immune; this is yet another reason why a healthy level of <a href="http://en.wikipedia.org/wiki/Herd_immunity">herd immunity</a> is so critical to maintain.<br /><br />PBE rates in California may be relatively low, but the problem is that parents who are likely to seek a PBE exemption for the MMR vaccine tend to cluster, making them susceptible to outbreaks of highly infectious diseases such as the measles. 
As we'll see, they cluster in private schools, they cluster in particular cities and they cluster in particular counties.<br /><br />California makes vaccination and PBE (personal belief exemption) information available here (indexed by school code):<br /><br /><a href="http://www.cdph.ca.gov/programs/immunize/pages/immunizationlevels.aspx">Immunization Levels in Child Care and Schools</a><br /><br />California also makes student poverty information available here (also indexed by school code):<br /><br /><a href="http://www.cde.ca.gov/ds/sd/sd/filessp.asp">Student Poverty - FRPM Data</a><br /><br />This is typical government data - Excel spreadsheets, bits of non-data all over the place. I've cleaned up both data sets for 2013-2014 here:<br /><br /><a href="https://github.com/octonion/vaccinations/tree/master/data">California kindergarten and poverty data for 2013-2014</a><br /><br />Let's start with a basic nested model using only the vaccination data to examine the PBE rate for public vs private schools:<br /><br /><script src="https://gist.github.com/octonion/2d087e27f8199f0dd122.js"></script><br />We get these nice ANOVA results indicating that the public or private status of a school does indeed add to our PBE model.<br /><br /><script src="https://gist.github.com/octonion/30973d9c758d08de70d8.js"></script><br />What's the estimated impact of public vs private? This is a nice illustration of why the odds ratio is so easy to use in logistic models with logit links. Here are the fixed effect estimates:<br /><br /><script src="https://gist.github.com/octonion/7c061b8e0ed941bf618f.js"></script><br />To get PBE rate estimates for average public and private schools, take the exponential of each estimate to get its <a href="http://en.wikipedia.org/wiki/Odds_ratio">odds ratio</a>. 
\begin{align*}<br />\mathrm{Intercept} &= e^{-3.113} = 0.044\\<br />\mathrm{public} &= e^{-0.606} = 0.546\\<br />\mathrm{private} &= e^{0} = 1.00<br />\end{align*}<br />The nice thing about odds ratios is that they simply multiply. What's our estimate for the odds ratio for the PBE rate of a public school? It's \(0.044\times 0.546 = 0.024\); for a private school it's \(0.044\times 1.00 = 0.044\). Translating to PBE rates we get \(\frac{0.024}{1+0.024} = 0.023\) for public schools and \(\frac{0.044}{1+0.044} = 0.042\) for private schools.<br /><br />Of course, the odds ratios for counties vary quite a bit, as do the odds ratios for cities within each county. You can see the \(\log({\mathrm{odds}})\) values for all of California's <a href="https://github.com/octonion/vaccinations/blob/master/counties.csv">counties here</a> and <a href="https://github.com/octonion/vaccinations/blob/master/cities.csv">cities here</a>.<br /><br />For another example, let's do a private school in Beverly Hills, which is in Los Angeles County. The odds ratio for Beverly Hills is \(3.875\); for Los Angeles County it's \(0.625\). Chaining as before, we get the overall estimated PBE odds ratio for a private school in Beverly Hills, Los Angeles County to be \(0.044\times 1.00\times 0.625\times 3.875 = 0.107\). Translated to an estimated PBE rate this is \(\frac{0.107}{1+0.107} = 0.096\). Thus, we'd expect about 10% of the children in a private Beverly Hills kindergarten to be unvaccinated due to personal belief exemptions. The implied vaccination rate of roughly 90% is <a href="http://www.ovg.ox.ac.uk/herd-immunity">well below the desired herd immunity level for measles of 95%</a>.<br /><br />In the next part I'll show how to merge the information in California's vaccination data and poverty data to examine the role of other factors in the clustering of unvaccinated children. 
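The odds-ratio chaining above is mechanical enough to wrap in a small helper (a sketch reusing the fitted values quoted in the post; the helper name is mine):

```python
import math

def chained_pbe_rate(*odds_factors):
    """Multiply odds-scale factors, then convert the resulting odds to a rate."""
    odds = math.prod(odds_factors)
    return odds / (1 + odds)

baseline = 0.044                 # e^{-3.113}, the intercept's odds
private = 1.00                   # private school is the reference level
la_county, beverly_hills = 0.625, 3.875

rate = chained_pbe_rate(baseline, private, la_county, beverly_hills)
print(round(rate, 3))  # 0.096
```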
As an example, charter schools in California tend to have even higher rates of personal belief exemptions than private schools.http://angrystatistician.blogspot.com/2015/02/mere-measles-look-at-vaccination-rates.htmlnoreply@blogger.com (Christopher Long)1tag:blogger.com,1999:blog-9149402429183581490.post-8175766332155393834Sat, 07 Feb 2015 23:24:00 +00002015-02-07T15:24:11.769-08:00Touring Waldo; Overfitting Waldo; Scanning Waldo; Waldo, Waldo, WaldoRandal Olson has written a nice article on <a href="http://en.wikipedia.org/wiki/Where%27s_Wally%3F">finding Waldo</a> - <a href="http://www.randalolson.com/2015/02/03/heres-waldo-computing-the-optimal-search-strategy-for-finding-waldo/">Here’s Waldo: Computing the optimal search strategy for finding Waldo</a>. Randal presents a variety of machine learning methods to find very good search paths among the 68 known locations of Waldo. Of course, there's no need for an approximation; modern algorithms can optimize tiny problems like these exactly.<br /><br />One approach would be to treat this as a <a href="http://en.wikipedia.org/wiki/Travelling_salesman_problem">traveling salesman problem</a> with Euclidean distances as edge weights, but you'll need to add a dummy node that has edge weight 0 to every node. 
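The dummy-node trick is easy to verify by brute force on a toy instance (made-up coordinates and pure-Python enumeration; the real 68-point instance is handled by a dedicated TSP solver):

```python
from itertools import permutations
from math import dist

points = [(0, 0), (10, 0), (0, 1), (10, 1)]  # made-up Waldo locations
DUMMY = len(points)                          # index of the zero-cost dummy node

def tour_length(order):
    """Length of a closed tour; edges touching the dummy node cost 0."""
    total = 0.0
    for a, b in zip(order, order[1:] + order[:1]):
        if DUMMY not in (a, b):
            total += dist(points[a], points[b])
    return total

# Brute-force the optimal TSP tour over the real nodes plus the dummy.
best = min(permutations(range(len(points) + 1)), key=tour_length)

# Deleting the dummy node from the optimal tour leaves an optimal
# Hamiltonian path over the real nodes.
cut = best.index(DUMMY)
path = best[cut + 1:] + best[:cut]
print(path, tour_length(best))
```

Because the dummy's two tour edges are free, the optimal tour's length equals the optimal Hamiltonian path's length, which is exactly why the reduction works.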
Once you have the optimal tour, delete the dummy node and you have your optimal <a href="http://en.wikipedia.org/wiki/Hamiltonian_path">Hamiltonian path</a>.<br /><br />I haven't coded in the dummy node yet, but here's the Waldo problem as a traveling salesman problem using <a href="http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/">TSPLIB format</a>.<br /><br /><script src="https://gist.github.com/octonion/7f0f70ab04805782bac1.js"></script><br />The <a href="http://www.math.uwaterloo.ca/tsp/concorde.html">Concorde software package</a> optimizes this in a fraction of a second:<br /><br /><script src="https://gist.github.com/octonion/95340a94ca840546ebdd.js"></script><br />I'll be updating this article to graphically show you the results for the optimal Hamiltonian path. There are also many additional questions I'll address. Do we really want to use this as our search path? We're obviously overfitting. Do we want to assume Waldo will <i>never</i> appear in a place he hasn't appeared before? When searching for Waldo we see an entire little area, not a point, so a realistic approach would be to develop a scanning algorithm that covers the entire image and accounts for our viewing area and the posterior Waldo density. We can also quickly jump our gaze from point to point while not scanning, but the scans themselves are much slower.http://angrystatistician.blogspot.com/2015/02/touring-waldo-overfitting-waldo.htmlnoreply@blogger.com (Christopher Long)1tag:blogger.com,1999:blog-9149402429183581490.post-7021674754617646563Fri, 06 Feb 2015 21:05:00 +00002015-02-06T13:05:07.740-08:00Short Notes: Get CUDA and gputools Running on Ubuntu 14.10Here's a basic guide for getting CUDA 7.0 and the R package gputools running perfectly under Ubuntu 14.10. It's not difficult, but there are a few issues and this will be helpful to have in a single place.<br /><br />If you're running Ubuntu 14.10, I'd recommend installing CUDA 7.0. 
NVIDIA has a 7.0 Debian package specifically for 14.10; this wasn't the case for CUDA 6.5, which only had a Debian package for 14.04.<br /><br />To get access to CUDA 7.0, you'll first need to register as a CUDA developer.<br /><br /><a href="https://developer.nvidia.com/cuda-registered-developer-program">Join The CUDA Registered Developer Program</a><br /><br />Once you have access, navigate to the CUDA 7.0 download page and get the Debian package.<br /><br /><a href="https://developer.nvidia.com/rdp/cuda-70-rc-downloads">CUDA 7.0 Release Candidate Downloads</a><br /><br />You'll need to be running either the NVIDIA 340 or 346 drivers. If you're having trouble upgrading, I'd suggest adding the <a href="http://www.ubuntuupdates.org/ppa/xorg-edgers">xorg-edgers PPA</a>.<br /><br /><script src="https://gist.github.com/octonion/5239bd8c595033c567e1.js"></script>Once your NVIDIA driver is set, install the CUDA 7.0 Debian package you've downloaded. Don't forget to remove any previously installed CUDA packages or repositories.<br /><br /><script src="https://gist.github.com/octonion/018ac43ddb973d9846aa.js"></script>You'll need to add paths so everything knows where CUDA is installed. Append the following to the .bashrc in your home directory:<br /><br /><script src="https://gist.github.com/octonion/554912e766654444e63b.js"></script>Execute "source ~/.bashrc" for these changes to be applied. If you want to test your new CUDA install, make the samples provided by NVIDIA.<br /><br /><script src="https://gist.github.com/octonion/d9088cc3165df2201883.js"></script>I get the following output when running BlackScholes:<br /><br /><script src="https://gist.github.com/octonion/64c309b25f62c029dff7.js"></script>The next task is to install gputools for R. Unfortunately, you can't install the current package through R, as the source code contains references to CUDA architectures that are obsolete under CUDA 7.0.
But that's easy to fix.<br /><br /><script src="https://gist.github.com/octonion/fb5dbac550983c018cb8.js"></script>Now do some editing in gputools/src/Makefile:<br /><br /><script src="https://gist.github.com/octonion/ed71024cfadb62b53b44.js"></script>Now build and install the patched gputools package while you're in the directory immediately above gputools:<br /><br /><script src="https://gist.github.com/octonion/f0028a3a6fb88dc2c9e3.js"></script>If you want to make the gputools package available to all R users:<br /><br /><script src="https://gist.github.com/octonion/e3953a0bbf1156e65cbf.js"></script>Keep in mind that they'll have to make the same environment variable changes as above. Let's test it!<br /><br /><script src="https://gist.github.com/octonion/1aa3b572ebd7a6db68b8.js"></script>Running gives us:<br /><br /><script src="https://gist.github.com/octonion/902639d2db61d0f86dc1.js"></script>A nice 26-fold speedup. We're all set!http://angrystatistician.blogspot.com/2015/02/short-notes-get-cuda-and-gputools.htmlnoreply@blogger.com (Christopher Long)1tag:blogger.com,1999:blog-9149402429183581490.post-574142958622078177Thu, 22 Jan 2015 01:15:00 +00002015-01-21T17:24:21.668-08:00A Very Rough Guide to Getting Started in Data Science: Part I, MOOCs<h3>Introduction</h3><br />Data science is a very hot, perhaps the hottest, field right now. Sports analytics has been my primary area of interest, and it's a field that has seen amazing growth in the last decade. It's no surprise that the most common question I'm asked is about becoming a data scientist. This will be a first set of rough notes attempting to answer this question from my own personal perspective. Keep in mind that this is only my opinion and there are many different ways to do data science and become a data scientist.<br /><div><br /></div><div>Data science is using data to answer a question.
This could be doing something as simple as making a plot by hand, or using Excel to take the average of a set of numbers. The important parts of this process are knowing which questions to ask, deciding what information you'd need to answer it, picking a method that takes this data and produces results relevant to your question and, most importantly, knowing how to properly interpret these results so you can be confident that they actually answer your question.</div><div><br /></div><div>Knowing the questions requires some domain expertise, either yours or someone else's. Unless you're a data science researcher, data science is a tool you apply to another domain.</div><div><br /></div><div>If you have the data you feel should answer your question, you're in luck. Frequently, though, you'll have to go out and collect the data yourself, e.g. scraping from the web. Even if you already have the data, it's common to have to process the data to remove bad data, correct errors and put it into a form better suited for analysis. A popular tool for this phase of data analysis is a scripting language; typically something like Python, Perl or Ruby. These are high-level programming languages that are very good at web work as well as manipulating data.</div><div><br /></div><div>If you're dealing with a large amount of data, you'll find that it's convenient to store it in a structured way that makes it easier to access, manipulate and update in the future. This will typically be a relational database of some type, such as PostgreSQL, MySQL or SQL Server. These all use the programming language SQL.</div><div><br /></div><div>Methodology and interpretation are the most difficult, broadest and most important parts of data science. You'll see methodology referenced as statistical learning, machine learning, artificial intelligence and data mining; these can be covered in statistics, computer science, engineering or other classes.
Interpretation is traditionally the domain of statistics, but this is always taught together with methodology.</div><div><br /></div><div>You can start learning much of this material freely and easily with MOOCs. Here's an initial list.<br /><br /></div><div><h3>MOOCs</h3></div><h4>Data Science Basics</h4><div><a href="https://www.coursera.org/course/datascitoolbox" target="_blank">Johns Hopkins: The Data Scientist’s Toolbox</a>. Overview of version control, markdown, git, GitHub, R, and RStudio. Started January 5, 2015. Coursera.</div><div><br /></div><div><a href="https://www.coursera.org/course/rprog" target="_blank">Johns Hopkins: R Programming</a>. R-based. Started January 5, 2015. Coursera.</div><div><br /></div><h4>Scripting Languages</h4><div><a href="https://www.udacity.com/course/cs101" target="_blank">Intro to Computer Science</a>. Python-based. Take anytime. Udacity; videos and exercises are free.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe class="youtube-player" type="text/html" width="640" height="385" src="http://www.youtube.com/embed/Pm_WAWZNbdA" allowfullscreen frameborder="0"></iframe></div><div><br /></div><div><a href="https://www.udacity.com/course/ud036" target="_blank">Programming Foundations with Python</a>. Python-based. Take anytime. Udacity; videos and exercises are free.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe class="youtube-player" type="text/html" width="640" height="385" src="http://www.youtube.com/embed/bwq2W6XUvyg" allowfullscreen frameborder="0"></iframe></div><div><br /></div><div><br /></div><div><a href="https://www.edx.org/course/introduction-computer-science-mitx-6-00-1x-0" target="_blank">MIT: Introduction to Computer Science and Programming Using Python</a>. Python-based. Class started January 9, 2015. 
edX.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe class="youtube-player" type="text/html" width="640" height="385" src="http://www.youtube.com/embed/ww2BdhILIio" allowfullscreen frameborder="0"></iframe></div><div><br /></div><div><br /></div><h4>Databases and SQL</h4><div><a href="https://www.coursera.org/course/db" target="_blank">Stanford: Introduction to Databases</a>. XML, JSON, SQL; uses SQLite for SQL. Self-paced. Coursera.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe class="youtube-player" type="text/html" width="640" height="385" src="http://www.youtube.com/embed/ShjrtAQmIVg" allowfullscreen frameborder="0"></iframe></div><div><br /></div><h4>Machine Learning</h4><div><a href="https://www.coursera.org/course/ml" target="_blank">Stanford: Machine Learning</a>. Octave-based. Class started January 19, 2015. Coursera.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe class="youtube-player" type="text/html" width="640" height="385" src="http://www.youtube.com/embed/qeHZOdmJvFU" allowfullscreen frameborder="0"></iframe></div><div><br /></div><div><a href="https://class.stanford.edu/courses/HumanitiesandScience/StatLearning/Winter2015/about" target="_blank">Stanford: Statistical Learning</a>. R-based. Class started January 19, 2015. 
Stanford OpenEdX.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe class="youtube-player" type="text/html" width="640" height="385" src="http://www.youtube.com/embed/St2-97n7atk" allowfullscreen frameborder="0"></iframe></div><div><br /></div><div><br /></div>http://angrystatistician.blogspot.com/2015/01/a-very-rough-guide-to-getting-started.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-8851062544262278534Mon, 19 Jan 2015 06:59:00 +00002015-01-19T01:23:42.131-08:00How Unfair are the NFL's Overtime Rules?In 2010 the <a href="http://en.wikipedia.org/wiki/Overtime_%28sports%29#Major_American_professional_leagues" target="_blank">NFL amended its overtime rules</a>, and in 2012 extended these to all regular season games. Previously, overtime was handled by sudden death - the first team to score won. The team winning a coin flip can elect to kick or receive (they invariably receive, as they should).<br /><br />Assuming the game ends in the first overtime, the team with the first possession wins under the following scenarios:<br /><ol><li>scores a touchdown on the first drive</li><li>kicks a field goal on the first drive; other team fails to score on the second drive</li><li>both teams kick a field goal on the first and second drives; win in sudden death</li><li>doesn't score on the first drive; defensive score during second drive</li><li>neither team scores on first or second drives; win in sudden death</li></ol><div>Under this overtime procedure, roughly how often should we expect the team winning the coin flip to win the game?</div><div><br /></div><div>For an average team the empirical probabilities of the above events per drive are:</div><div><ul><li>\(\mathrm{defensiveTD} = \mathrm{Pr}(\text{defensive touchdown}) = 0.02\)</li><li>\(\mathrm{safety} = \mathrm{Pr}(\text{safety}) = 0.001\)</li><li>\(\mathrm{fieldGoal} = \mathrm{Pr}(\text{field goal}) = 
0.118\)</li><li>\(\mathrm{offensiveTD} = \mathrm{Pr}(\text{offensive touchdown}) = 0.200\)</li></ul><div>We'll also use the following:</div><div><ul><li>\(\mathrm{defensiveScore} = \mathrm{Pr}(\text{any defensive score}) = \mathrm{defensiveTD} + \mathrm{safety}\)</li><li>\(\mathrm{offensiveScore} = \mathrm{Pr}(\text{any offensive score}) = \mathrm{fieldGoal} + \mathrm{offensiveTD}\)</li><li>\(\mathrm{noOFscore} = \mathrm{Pr}(\text{no offensive score}) = 1 - \mathrm{offensiveScore}\)</li><li>\(\mathrm{noScore} = \mathrm{Pr}(\text{no score}) = 1 - \mathrm{offensiveScore} - \mathrm{defensiveScore}\)</li><li>\(\mathrm{sdWin} = \mathrm{Pr}(\text{driving team winning under sudden death rules})\)</li></ul></div><div>Then the probabilities of the above numbered outcomes are approximately:</div><div><ol><li>\(\mathrm{offensiveTD}\)</li><li>\(\mathrm{fieldGoal}\times \mathrm{noOFscore}\)</li><li>\(\mathrm{fieldGoal}\times \mathrm{fieldGoal}\times \mathrm{sdWin}\)</li><li>\(\mathrm{noScore}\times \mathrm{defensiveScore}\)</li><li>\(\mathrm{noScore}\times \mathrm{noScore}\times \mathrm{sdWin}\)</li></ol><div>The last thing we need to work out is \(\text{sdWin}\).
There are three ways for the team with possession to win:</div></div></div><div><ol><li>any offensive score on the first drive</li><li>no offensive score; any defensive score on the second drive</li><li>neither team scores on the first or second possessions; we're back to our original state</li></ol><div>These three scenarios have values:</div><ol><li>\(\mathrm{offensiveScore}\)</li><li>\(\mathrm{noOFscore}\times \mathrm{defensiveScore}\)</li><li>\(\mathrm{noScore}\times \mathrm{noScore}\times \mathrm{sdWin}\)</li></ol><div>Doing the math, we get that \begin{align*}<br />\mathrm{sdWin} &= \mathrm{offensiveScore} + \mathrm{noOFscore}\times \mathrm{defensiveScore} + {\mathrm{noScore}}^2\times\mathrm{sdWin};\\<br />\mathrm{sdWin} &=\frac{(\mathrm{offensiveScore} + \mathrm{noOFscore}\times \mathrm{defensiveScore})}{(1-{\mathrm{noScore}}^2)}.<br />\end{align*}<br />Putting it all together we get \[<br />\text{win} = \mathrm{offensiveTD} + \mathrm{fieldGoal}\times \mathrm{noOFscore} + \mathrm{noScore}\times \mathrm{defensiveScore}\\<br />+ (\mathrm{fieldGoal}^2+ \mathrm{noScore}^2)\times \mathrm{sdWin}.\]<br />Plugging in our empirical values, we finally arrive at \[\mathrm{Pr}(\text{win coin flip, win game}) = 0.560.\] For comparison, under the original sudden death rules, \[\mathrm{Pr}(\text{win coin flip, win game}) = 0.589.\] So the NFL overtime rules are still ridiculously unfair in favor of the winner of the coin flip, but not as ridiculously unfair as they were under the original sudden death rules.<br /><br />How do these numerical results compare to actual outcomes? Under the current overtime rules, there have been 51 overtime games. In 27 of these the team winning the coin toss won the game, in 21 the team losing the coin toss won the game and there have been 3 ties. That puts \(\mathrm{win} = \frac{27}{48} = 0.5625\) for games not ending in ties. 
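The arithmetic above is easy to fumble by hand, so here's a quick Python sketch of the model calculation, using the rounded per-drive probabilities listed above (with these rounded inputs sudden death comes out at about 0.590 rather than the 0.589 quoted, which was presumably computed from unrounded values):

```python
# Per-drive empirical probabilities quoted above
defensive_td = 0.02
safety = 0.001
field_goal = 0.118
offensive_td = 0.200

defensive_score = defensive_td + safety           # any defensive score
offensive_score = field_goal + offensive_td       # any offensive score
no_of_score = 1 - offensive_score                 # no offensive score
no_score = 1 - offensive_score - defensive_score  # no score at all

# Pr(driving team wins) under pure sudden death:
# sdWin = (offensiveScore + noOFscore*defensiveScore) / (1 - noScore^2)
sd_win = (offensive_score + no_of_score * defensive_score) / (1 - no_score ** 2)

# Pr(coin-flip winner wins) under the current overtime rules
win = (offensive_td
       + field_goal * no_of_score
       + no_score * defensive_score
       + (field_goal ** 2 + no_score ** 2) * sd_win)

print(round(win, 3), round(sd_win, 3))  # 0.56 0.59
```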
Close enough!<br /><br />If you'd like to tweak the probabilities for each event to see how the resulting probability for the winner of the coin flip changes, I have a <a href="https://github.com/octonion/football/tree/master/overtime" target="_blank">simple Python script here</a>.</div></div>http://angrystatistician.blogspot.com/2015/01/how-unfair-is-nfls-current-overtime.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-4012430552273245039Mon, 12 Jan 2015 03:10:00 +00002015-01-12T14:36:16.167-08:00A Short Note on Automatic Algorithm Optimization via Fast Matrix Exponentiation<a href="https://github.com/borzunov" target="_blank">Alexander Borzunov</a> has written an <a href="http://kukuruku.co/hub/algorithms/automatic-algorithms-optimization-via-fast-matrix-exponentiation" target="_blank">interesting article</a> about his <a href="https://github.com/borzunov/cpmoptimize" target="_blank">Python code</a> that uses fast matrix exponentiation to automatically optimize certain algorithms. 
It's definitely a recommended read.<br /><br />In his article, Alexander mentions that it's difficult to directly derive a matrix exponentiation algorithm for recursively-defined sequences such as \[<br />F_n = \begin{cases}<br />0, & n = 0\\<br />1, & n = 1\\<br />1, & n = 2\\<br />7(2F_{n-1} + 3F_{n-2})+4F_{n-3}+5n+6, & n \geq 3<br />\end{cases}<br />\] While it's true that it's not entirely simple, there is a relatively straightforward way to do this that's worth knowing.<br /><br />The only difficulty is due to the term \(5n+6\), but we can eliminate it by setting<br />\(F_n = G_n + an+b\), then solving for appropriate values of \(a, b\).<br /><br />Substituting and grouping terms we have \[<br />G_n + an+b = 7(2G_{n-1} + 3G_{n-2})+4G_{n-3} + 39an-68a+39b+5n+6,<br />\] and equating powers of \(n\) we need to solve the equations \[<br />\begin{align*}<br />a &= 39a+5,\\<br />b &= -68a+39b+6.<br />\end{align*}<br />\] Setting \(a = -\frac{5}{38}, b = -\frac{142}{361}\), so that \(G_n = F_n - an - b\), we end up with the homogeneous system \[<br />G_n = \begin{cases}<br />\frac{142}{361}, & n = 0\\<br />\frac{1101}{722}, & n = 1\\<br />\frac{598}{361}, & n = 2\\<br />7(2G_{n-1} + 3G_{n-2})+4G_{n-3}, & n \geq 3<br />\end{cases}<br />\] This is now easily cast into matrix exponentiation form using the <a href="http://en.wikipedia.org/wiki/Fibonacci_number#Matrix_form" target="_blank">standard method</a>.http://angrystatistician.blogspot.com/2015/01/a-short-note-on-automatic-algorithm.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-6682721504411884692Wed, 07 Jan 2015 19:08:00 +00002015-01-09T00:31:22.861-08:00Young Alan Turing and the ArctangentWith the release of the new film <a href="http://www.rottentomatoes.com/m/the_imitation_game/" target="_blank">"The Imitation Game"</a>, I decided to read the biography this excellent film was based on - <a
href="http://www.amazon.com/gp/product/B00E3URHEK/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B00E3URHEK&linkCode=as2&tag=octonion-20&linkId=YRGFVTE5D6727REE" target="_blank">Alan M. Turing: Centenary Edition</a>. In it, the author Andrew Hodges relates the story that the 15-year-old Alan Turing derived the <a href="http://mathworld.wolfram.com/MaclaurinSeries.html" target="_blank">Maclaurin series</a> for the \(\arctan\) function, i.e. \[\arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \ldots\] This is trivial using calculus, but it's explicitly stated that young Alan Turing neither knew nor used calculus. How would you derive such a series without calculus?<br /><br />This is a tricky problem, and I'd suggest first tackling the much easier problem of deriving the Maclaurin series for \(\exp(x)\) from the relation \( \exp(2x) = \exp(x)\cdot \exp(x)\). This is an underconstrained relation, so you'll need to assume \(c_0 = 1, c_1 = 1\).<br /><br />Getting back to \(\arctan\), you could start with the <a href="http://en.wikipedia.org/wiki/List_of_trigonometric_identities" target="_blank">double-angle formula for the tangent</a>: \[\tan(2x) = \frac{2\tan(x)}{1-{\tan}^2(x)}.\] Now use the <a href="http://en.wikipedia.org/wiki/Weierstrass_substitution" target="_blank">Weierstrass-like substitution</a> \(x = \arctan(t)\) to get \[\tan(2\arctan(t)) = \frac{2t}{1-t^2}.\] The right-hand side can be expanded in the usual geometric series fashion to get \[\tan(2\arctan(t)) = 2t\cdot (1+t^2+t^4+\ldots).\]<br />Finally, take the \(\arctan\) of both sides and assume we have the series expansion \(\arctan(x) = c_1 x + c_3 x^3 + c_5 x^5 + \ldots\).
Note that we may ignore the terms with even powers of \(x\) as \(\arctan(x)\) is an <a href="http://en.wikipedia.org/wiki/Even_and_odd_functions#Odd_functions" target="_blank">odd function</a>.<br /><br />This gives us the relation \[2\arctan(t) = \arctan(2t\cdot (1+t^2+t^4+\ldots))\] and expanding as a power series \[2(c_1 t + c_3 t^3 + \ldots) = c_1 \left(2t\cdot (1+t^2+\ldots)\right) + c_3 \left(2t\cdot (1+t^2+\ldots)\right)^3 + \ldots\]<br />The next step is to line up powers of \(t\) on both sides and set up a system of simultaneous equations. There's some algebra and combinatorics involved, but we end up with the system of equations \[c_{2i+1} = \sum_{j=0}^{i} c_{2j+1}\cdot 2^{2j} \cdot {{i+j} \choose {2j}}.\] Note that this system is underconstrained due to our functional relationship being satisfied by any multiple of the \(\arctan\) function. We'll assume that \(c_1 = 1\), but note that this follows from the <a href="https://math.stackexchange.com/questions/75130/how-to-prove-that-lim-limits-x-to0-frac-sin-xx-1/75151#75151" target="_blank">classical (non-calculus) limit</a> \( \lim_{x\to 0} \frac{\sin(x)}{x} = 1\).<br /><br />The first few relations are \begin{align*}<br />c_1 &= c_1 \\<br />c_3 &= c_1 + 4\cdot c_3 \\<br />c_5 &= c_1 + 12\cdot c_3 + 16\cdot c_5 \\<br />c_7 &= c_1 + 24\cdot c_3 + 80\cdot c_5 + 64\cdot c_7<br />\end{align*}<br />Assuming \(c_1 = 1\) as above we quickly calculate \( c_3 = -\frac{1}{3}, c_5 = \frac{1}{5}, c_7 = -\frac{1}{7}\), with the pattern being obvious.<br /><br />That \(c_{2i+1} = \frac{(-1)^i}{2i+1}\) can be verified by Wolfram Alpha:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://www.wolframalpha.com/input/?i=%5Csum_%7Bj%3D0%7D%5E%7Bi%7D+%28-1%29%5Ej%2F%282j%2B1%29+2%5E%7B2j%7D+%7B%7Bi%2Bj%7D+choose+%7B2j%7D%7D" target="_blank"><img alt="Wolfram Alpha" border="0" src="http://4.bp.blogspot.com/-R3rBbUh6_V4/VK4IDbKpB9I/AAAAAAAABmQ/TW0iRh25w0k/s1600/img.gif" title="" /></a></div><br />An obvious
question is whether or not there's a simple demonstration of this; in particular, one that a young Alan Turing may have found. This I don't know (yet).http://angrystatistician.blogspot.com/2015/01/young-alan-turing.htmlnoreply@blogger.com (Christopher Long)1tag:blogger.com,1999:blog-9149402429183581490.post-7402880594498963226Sun, 12 Oct 2014 03:38:00 +00002014-10-11T20:41:45.665-07:00A Strange Recursive Relation, AutomaticHofstadter mentions the following recursive relation in his great book "Gödel, Escher, Bach": \[<br />\begin{align}<br />g(0) &= 0;\\<br />g(n) &= n-g(g(n-1)).<br />\end{align}<br />\] I claim that \( g(n) = \left\lfloor \phi\cdot (n+1) \right\rfloor \), where \( \phi = \frac{-1+\sqrt{5}}{2}\), and I'll show this using a technique that makes proving many identities of this type nearly automatic.<br /><br />Let \( \phi\cdot n = \left\lfloor \phi\cdot n \right\rfloor + e\), where \( 0 < e < 1\) since \( \phi \) is irrational (and for the same reason \(e \neq 1-\phi\)), and note that \(\phi\) satisfies \( {\phi}^2 + \phi - 1 = 0\).
Some algebra gives \[<br />\begin{align}<br />n-\left\lfloor \left( \left\lfloor \phi\cdot n \right\rfloor + 1 \right) \cdot \phi \right\rfloor<br />&= n-\left\lfloor \left( n\cdot \phi - e + 1 \right) \cdot \phi \right\rfloor \\<br />&= n-\left\lfloor n\cdot {\phi}^2 - e\cdot \phi + \phi \right\rfloor \\<br />&= n-\left\lfloor n\cdot \left(1-\phi\right) - e\cdot \phi + \phi \right\rfloor \\<br />&= n-n-\left\lfloor -n\cdot \phi - e\cdot \phi + \phi \right\rfloor \\<br />&= -\left\lfloor -n\cdot \phi - e\cdot \phi + \phi \right\rfloor \\<br />&= -\left\lfloor -n \cdot \phi + e - e - e\cdot \phi + \phi \right\rfloor \\<br />&= \left\lfloor \phi\cdot n \right\rfloor -\left\lfloor - e - e\cdot \phi + \phi \right\rfloor.<br />\end{align}<br />\]<br />Now \[<br />\begin{align}<br />0 < e < 1-\phi &\implies 0 < - e - e\cdot \phi + \phi < \phi;\\<br />1-\phi < e < 1 &\implies -1 < - e - e\cdot \phi + \phi < 0.<br />\end{align}<br />\]<br />In the first case the final floor term is \(0\), and in the second it's \(-1\); on the other hand, writing \( \phi\cdot (n+1) = \left\lfloor \phi\cdot n \right\rfloor + e + \phi \) shows that \( \left\lfloor \phi\cdot (n+1) \right\rfloor \) equals \( \left\lfloor \phi\cdot n \right\rfloor \) when \( e < 1-\phi \) and \( \left\lfloor \phi\cdot n \right\rfloor + 1 \) when \( e > 1-\phi \).<br />This implies \[ n-\left\lfloor \left( \left\lfloor \phi\cdot n \right\rfloor + 1 \right) \cdot \phi \right\rfloor = \left\lfloor \phi\cdot (n+1) \right\rfloor .\] Since \( \left\lfloor \phi\cdot (0+1) \right\rfloor = 0\), we're done.<br /><br />The point of the algebra was to move all terms involving \(n\) out of the floor and then check how the remaining term varies with \( e\).
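The claim is also easy to check numerically; here's a quick Python sketch, using exact integer arithmetic for the floor so there are no floating-point worries:

```python
from math import isqrt

def g_closed(n):
    # floor(phi*(n+1)) with phi = (sqrt(5)-1)/2, in exact integer
    # arithmetic: floor((n+1)*phi) = (isqrt(5*(n+1)^2) - (n+1)) // 2
    m = n + 1
    return (isqrt(5 * m * m) - m) // 2

def g_rec(n, memo={0: 0}):
    # Hofstadter's recursion g(0) = 0, g(n) = n - g(g(n-1)), memoized
    if n not in memo:
        memo[n] = n - g_rec(g_rec(n - 1))
    return memo[n]

# The closed form matches the recursion on a long initial segment.
print(all(g_closed(n) == g_rec(n) for n in range(10000)))  # True
print([g_closed(n) for n in range(10)])  # [0, 1, 1, 2, 3, 3, 4, 4, 5, 6]
```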
A simple idea, but very useful.http://angrystatistician.blogspot.com/2014/10/a-strange-recursive-relation-automatic.htmlnoreply@blogger.com (Christopher Long)1tag:blogger.com,1999:blog-9149402429183581490.post-562657659715110374Sun, 05 Oct 2014 19:26:00 +00002014-10-05T21:01:29.267-07:00Closed Under MeansHere's a nice little problem from the 13th All Soviet Union Mathematics Olympiad.<br /><blockquote class="tr_bq">Given a set of real numbers \(S\) containing 0 and 1 that's closed under finite means, show that it contains all rational numbers in the interval \(\left[0,1\right]\).</blockquote>This isn't a difficult problem, but there's a particularly nice solution.<br /><br />First observe that if \(x\in S\) then both \(\frac{x}{4}\) and \(\frac{3x}{4}\) are in \(S\); average \(\{0,x\}\) to get \(\frac{x}{2}\), average \(\{0, \frac{x}{2}\}\) to get \(\frac{x}{4}\), average \(\{\frac{x}{2}, x\}\) to get \(\frac{3x}{4}\).<br /><br />We could show any rational number \(\frac{m}{n}\) with \(1\leq m < n\) is in \(S\) if we had \(n\) distinct elements from \(S\) that summed to \(m\). Let's exhibit one.<br /><br />Start with an array with \(m\) 1s on the left, \(n-m\) 0s on the right. Repeatedly replace adjacent \(x,y\) values with \(\frac{3(x+y)}{4}, \frac{(x+y)}{4}\), where either \(x=1,y\neq1\) or \(x\neq 0, y=0\), until there is one 0 and one 1 left. 
We can do this in at most \(n-2\) steps; each resulting number is guaranteed to be in \(S\) by the above note, and each number is guaranteed to be distinct by construction!<br /><br />Examples:<br /><br />\(\frac{1}{3}: \left[1,0,0\right] \to \left[\frac{3}{4},\frac{1}{4},0\right] \)<br /><br />\(\frac{2}{5}: \left[1,1,0,0,0\right] \to \left[1,\frac{3}{4},\frac{1}{4},0,0\right] \to \left[1,\frac{3}{4},\frac{3}{16},\frac{1}{16},0\right] \)http://angrystatistician.blogspot.com/2014/10/closed-under-means.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-8609248843020646690Fri, 29 Aug 2014 04:28:00 +00002014-08-30T01:15:37.813-07:00Learn One Weird Trick And Easily Solve The Product-Sum ProblemA tribute to <a href="https://en.wikipedia.org/wiki/Martin_Gardner" target="_blank"><b>Martin Gardner</b></a>.<br /><br />For which sets of positive integers does the sum equal the product? For example, when does \( x_1 + x_2 = x_1\cdot x_2\)? It's easy to see that this is only true when \(x_1 = x_2 = 2\).<br /><br />In the general case our equality is \(\sum_{i=1}^{n} x_i= \prod_{i=1}^{n} x_i \). We can rearrange terms to give \[x_1+x_2+\sum_{i=3}^{n} x_i= x_1\cdot x_2\cdot \prod_{i=3}^{n} x_i,\] and this in turn factors nicely to give us \[\left( x_1\cdot \prod_{i=3}^{n} x_i - 1\right)\cdot \left( x_2\cdot \prod_{i=3}^{n} x_i - 1\right) = \left( \prod_{i=3}^{n} x_i \right)\cdot \left(\sum_{i=3}^{n} x_i \right) + 1.\] How does this help? Consider the case \(n=5\), and without loss of generality assume \(x_i \ge x_{i+1}\) for all \(i\). If \(x_3=x_4=x_5=1\) our factorized equation becomes \[(x_1-1)\cdot(x_2-1)=4,\] with the obvious solutions \( x_1=5, x_2=2; x_1=3, x_2=3\). The only remaining case to consider is \(x_3=2\), as any other case forces \( \sum_{i=1}^{n} x_i < \prod_{i=1}^{n} x_i \).
For this case our equality becomes \[\left( 2 x_1 - 1\right)\cdot \left( 2 x_2 - 1\right) = 9.\] This gives us the remaining solution \(x_1 = x_2 = x_3 = 2\) with the other \(x_i = 1\).<br /><br />This is quite efficient. For example, for the case \(n=100\) the only equations we need to consider are \[\begin{aligned} (x_1-1)\cdot(x_2-1) &= 99\\<br /> (2 x_1-1)\cdot(2 x_2-1) &= 199\\<br /> (3 x_1-1)\cdot(3 x_2-1) &= 301\\<br /> (4 x_1-1)\cdot(4 x_2-1) &= 405\\<br /> (4 x_1-1)\cdot(4 x_2-1) &= 401\\<br /> (6 x_1-1)\cdot(6 x_2-1) &= 607\\<br /> (9 x_1-1)\cdot(9 x_2-1) &= 919\\<br /> (8 x_1-1)\cdot(8 x_2-1) &= 809\\<br /> (12 x_1-1)\cdot(12 x_2-1) &= 1225\\<br /> (16 x_1-1)\cdot(16 x_2-1) &= 1633.<br />\end{aligned}\] Now 199, 401, 607, 919, 809 are all prime, ruling them out immediately, and 301 and 1633 don't have factors of the right form. The other equations yield the five solutions \( (100,2), (34,4), (12,10), (7,4,4), (3,3,3,2,2) \) with the other \(x_i = 1\).<br /><br />For \(n = 1000 \) you'd need to consider 52 cases, but most of these are eliminated immediately. I get the six solutions \( (1000,2), (334,4), (112,10), (38,28), (67,4,4), (16,4,4,4) \).<br /><br />Have fun!<br /><br />http://angrystatistician.blogspot.com/2014/08/learn-this-one-weird-trick-and-easily.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-5713515223902536661Wed, 09 Jul 2014 03:01:00 +00002014-07-08T20:30:01.826-07:00Finding the Best Book Quotes: Power Ranking GoodreadsMy friend Jordan Ellenberg <a href="http://online.wsj.com/articles/the-summers-most-unread-book-is-1404417569" target="_blank">wrote an article for the Wall Street Journal</a> recently describing a metric to roughly measure the least read books, which he calls the Hawking Index (HI).
As he describes it, take the page numbers of a book's five top highlights, average them, and divide by the number of pages in the whole book.<br /><br />On a discussion thread on Facebook, this led to a proposal from me for measuring the general quality of a quote. Assume a user has some level of discrimination \(D\); the higher the value of \(D\), the less likely they are to quote any given passage. Now assume each passage has some measure of quality \(Q\); the higher the value of \(Q\), the more likely a passage is to be quoted. Let's try a classic <a href="http://en.wikipedia.org/wiki/Pairwise_comparison" target="_blank">Bradley-Terry-Luce</a> model - if a user with discrimination \(D\) quotes at least one passage from a particular work, the probability \(p\) that they'll quote a given passage with quality \(Q\) from that same work is roughly \[ p = \frac{Q}{Q+D}.\] <br />Nice, but where can we find data to actually test this theory? It turns out an excellent source is <a href="https://www.goodreads.com/" target="_blank">Goodreads</a>. Users are numbered sequentially starting with Goodreads founder <a href="https://www.goodreads.com/user/show/1" target="_blank">Otis Chandler's ID of 1</a>; a user's <a href="https://www.goodreads.com/quotes/list/1" target="_blank">associated quote page</a> has a simple URL structure. Furthermore, every user's quote list has an <a href="https://www.goodreads.com/quotes/list_rss/1-otis-chandler" target="_blank">associated RSS/Atom URL</a> that provides an easy-to-parse XML file.<br /><br />The steps:<br /><ol><li>Write a simple Go scraper that uses goroutines to quickly and sequentially get user quote pages, grab the count of quotes shown on that page (truncated at 30), then grab the RSS/Atom URL.
This is able to process about 2,500 users per minute over a slow internet connection.</li><li>Write a simple Ruby program to fetch quote XML files for every user that has at least one quote.</li><li>Write a simple Ruby program to fetch the author URL and work URL for every quote cited by at least one user.</li><li>Load all this data into a PostgreSQL database.</li><li>Write a simple R program to pull the data from the database and fit our model. We assign a value for the outcome "quoted" of 1 if the user quoted a passage; we assign "quoted" the value 0 if the user quoted a passage from that work, but not that particular passage. Both user "discrimination" and quote "quality" are treated as factors and modeled as random effects; the R function "lmer" from the package "lme4" was used to fit the model.</li></ol><div>If there is interest, I can make all of this code publicly available through my <a href="https://github.com/octonion" target="_blank">GitHub account</a>. None of it is proprietary.</div><div><br /></div><div>As a test I grabbed the data from 333760 users; 10724 had at least one quote, and there were 83160 total quotes and 32621 unique quotes. The model took about 3.5 minutes to fit using an optimized BLAS library and 4 hyperthreaded CPU cores.</div><div><br /></div><div>The top 10 quotes on Goodreads, ranked by quality:<br /><ol><li><a href="https://www.goodreads.com/quotes/11035-no-one-can-make-you-feel-inferior-without-your-consent" target="_blank">“No one can make you feel inferior without your consent.”</a><br /><b>Eleanor Roosevelt</b>, <i>This is My Story</i></li><li><a href="https://www.goodreads.com/quotes/352-outside-of-a-dog-a-book-is-man-s-best-friend" target="_blank">“Outside of a dog, a book is man's best friend.
Inside of a dog it's too dark to read.”</a><br /><b>Groucho Marx</b>, <i>The Essential Groucho: Writings For By And About Groucho Marx</i></li><li><a href="https://www.goodreads.com/quotes/2340-twenty-years-from-now-you-will-be-more-disappointed-by" target="_blank">“Twenty years from now you will be more disappointed by the things that you didn't do than by the ones you did do. So throw off the bowlines. Sail away from the safe harbor. Catch the trade winds in your sails. Explore. Dream. Discover.”</a><br /><b>H. Jackson Brown Jr.</b>, <i>P.S. I Love You</i></li><li><a href="https://www.goodreads.com/quotes/1198-i-am-so-clever-that-sometimes-i-don-t-understand-a" target="_blank">“I am so clever that sometimes I don't understand a single word of what I am saying.”</a><br /><b>Oscar Wilde</b>, <i>The Happy Prince and Other Stories</i></li><li><a href="https://www.goodreads.com/quotes/5543-insanity-is-doing-the-same-thing-over-and-over-again" target="_blank">“Insanity is doing the same thing, over and over again, but expecting different results.”</a><br /><b>Narcotics Anonymous</b>, <i>Narcotics Anonymous</i></li><li><a href="https://www.goodreads.com/quotes/2644-he-has-achieved-success-who-has-lived-well-laughed-often" target="_blank">“He has achieved success who has lived well, laughed often, and loved much;Who has enjoyed the trust of pure women, the respect of intelligent men and the love of little children;Who has filled his niche and accomplished his task;Who has never lacked appreciation of Earth's beauty or failed to express it;Who has left the world better than he found it,Whether an improved poppy, a perfect poem, or a rescued soul;Who has always looked for the best in others and given them the best he had;Whose life was an inspiration;Whose memory a benediction.”</a><br /><b>Bessie Anderson Stanley</b>, <i>More Heart Throbs Volume Two in Prose and Verse Dear to the American People And by them contributed as a Supplement to the original $10,000 Prize Book 
HEART THROBS</i></li><li><a href="https://www.goodreads.com/quotes/2627-the-trouble-with-having-an-open-mind-of-course-is" target="_blank">“The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it.”</a><br /><b>Terry Pratchett</b>, <i>Diggers</i></li><li><a href="https://www.goodreads.com/quotes/2621-love-all-trust-a-few-do-wrong-to-none" target="_blank">“Love all, trust a few, do wrong to none.”</a><br /><b>William Shakespeare</b>, <i>All's Well That Ends Well</i></li><li><a href="https://www.goodreads.com/quotes/6396-the-paradoxical-commandments-people-are-illogical-unreasonable-and-self-centered-love" target="_blank"><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;">“The Paradoxical Commandments</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;"><br />People are illogical, unreasonable, and self-centered.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;">Love them anyway.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;"><br />If you do good, people will accuse you of selfish ulterior motives.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;">Do good anyway.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;"><br />If you are successful, you will win false friends and true enemies.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;">Succeed anyway.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;"><br />The good 
you do today will be forgotten tomorrow.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;">Do good anyway.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;"><br />Honesty and frankness make you vulnerable.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;">Be honest and frank anyway.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;"><br />The biggest men and women with the biggest ideas can be shot down by the smallest men and women with the smallest minds.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;">Think big anyway.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;"><br />People favor underdogs but follow only top dogs.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;">Fight for a few underdogs anyway.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;"><br />What you spend years building may be destroyed overnight.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;">Build anyway.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;"><br />People really need help but may attack you if you do help them.</span><span style="background-color: white; color: #181818; font-family: georgia, serif; font-size: 14px; line-height: 18px;">Help people anyway.</span><span style="background-color: white; color: #181818; font-family: 
georgia, serif; font-size: 14px; line-height: 18px;"><br />Give the world the best you have and you'll get kicked in the teeth.</span></a><span style="background-color: white;"><span style="color: #181818; font-family: georgia, serif;"><span style="font-size: 14px; line-height: 18px;"><a href="https://www.goodreads.com/quotes/6396-the-paradoxical-commandments-people-are-illogical-unreasonable-and-self-centered-love" target="_blank">Give the world the best you have anyway.”</a></span></span><br /><span style="color: #181818; font-family: georgia, serif;"><span style="font-size: 14px; line-height: 18px;"><b>Kent M. Keith</b>, <i>The Silent Revolution: Dynamic Leadership in the Student Council</i></span></span></span></li><li><span style="background-color: white;"><span style="color: #181818; font-family: georgia, serif;"><span style="font-size: 14px; line-height: 18px;"><a href="https://www.goodreads.com/quotes/2795-she-is-too-fond-of-books-and-it-has-turned" target="_blank">“She is too fond of books, and it has turned her brain.”</a></span><br /><span style="font-size: 14px; line-height: 18px;"><b>Louisa May Alcott</b>, <i>Work: A Story of Experience</i></span></span></span></li></ol></div>http://angrystatistician.blogspot.com/2014/07/finding-best-book-quotes-power-ranking.htmlnoreply@blogger.com (Christopher Long)8tag:blogger.com,1999:blog-9149402429183581490.post-5487884645169400777Tue, 08 Apr 2014 03:04:00 +00002014-04-09T23:39:54.078-07:00Five Free Student Tickets for the SaberSeminar in Boston (August 17-18, 2014)<a href="https://twitter.com/Bbl_Astrophyscs" target="_blank">Meredith Wills</a>, <a href="https://twitter.com/injuryexpert" target="_blank">Will Carroll</a> and <a href="https://twitter.com/octonion" target="_blank">myself</a> are donating four student two-day tickets, including lunch, for the upcoming baseball analytics <a href="http://saberseminar.com/" target="_blank">Saberseminar</a> run by <a href="https://twitter.com/brooksbaseball" 
target="_blank">Dan Brooks</a>. This is a wonderful event, and 100% of the proceeds are donated to the <a href="https://www.jimmyfund.org/" target="_blank">Jimmy Fund</a>. You <b>must</b> be a current student. Meredith and I will be choosing four students by the end of this week, Sunday April 13, 2014.<br /><br />Please note:<br /><ul><li>These tickets are for both days, August 17-18, 2014</li><li>The event is in Boston, MA</li><li>Lunch is included, but <b>no</b> other meals</li><li>Transportation and lodging are <b>not</b> included</li></ul><div><br />If you would like to be considered for a donated ticket, please send:</div><div><ul><li>Your full name (first and last)</li><li>If you're outside of the Boston area, how will you be getting to the event?</li><li>Your school affiliation and whether high school or college</li><li>Best contact email address (if different from reply-to address)</li><li>A little about your baseball interests, analytical or otherwise</li><li>Do you see yourself working in baseball? For a team, as a journalist, or something else?</li></ul><div><br />Please email the above information to me at <a href="mailto:sabermetrics@gmail.com" target="_blank">sabermetrics@gmail.com</a>.</div><div><br /></div><div>Again, please do so by the end of the day on Sunday, April 13, 2014. 
Once the tickets are awarded they're gone.</div></div>http://angrystatistician.blogspot.com/2014/04/four-free-student-tickets-for.htmlnoreply@blogger.com (Christopher Long)2tag:blogger.com,1999:blog-9149402429183581490.post-3718907100550685167Mon, 30 Dec 2013 17:51:00 +00002013-12-30T10:30:04.425-08:00A Stupid and Strange Way of Looking at Sports Power Ratings that could be Smart and UsefulAs I've <a href="http://angrystatistician.blogspot.com/2013/03/baseball-chess-psychology-and.html"><b>mentioned previously</b></a>, a common method used in sports for estimating game outcomes, known as <a href="https://en.wikipedia.org/wiki/Log5"><b>log5</b></a>, can be written \[p = \frac{p_1 q_2}{p_1 q_2+q_1 p_2}\] where \(p_i\) is the fraction of games won by team \(i\) and \(q_i\) is the fraction of games lost by team \(i\). We're assuming that there are no ties. What's the easiest way to derive this estimate? Here's one argument. Assume team \(i\) has a probability \(p_i\) of beating an average team (a team that wins half its games). Now imagine that this means for any given game the team has some <b>"strength"</b> sampled from [0,1] with median \(p_i\) and that the stronger team always wins. Thus, the probability that team 1 beats team 2 is \[ p = \int_0^1 \int_0^1 \! \mathrm{Pr}(p_1 > p_2) \, \mathrm{d} p_1 \mathrm{d} p_2 .\] This looks complicated, but with probability \(p_1\) team 1 is stronger than an average team and with probability \(p_2\) team 2 is stronger than an average team. From this perspective the log5 estimate is just the <a href="https://en.wikipedia.org/wiki/Bayes'_theorem" target="_blank"><b>Bayesian probability</b></a> that team 1 will be stronger than an average team while team 2 will be weaker than an average team, conditional on either team 1 being stronger than an average team and team 2 weaker than an average team, or team 1 weaker than an average team and team 2 stronger than an average team. 
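The log5 estimate itself is a one-liner to compute. Here's a minimal sketch (Python, illustrative only; the function name is mine, not from the post):

```python
def log5(p1, p2):
    """Log5 estimate of the probability that team 1 beats team 2,
    given each team's overall winning fraction p1, p2."""
    q1, q2 = 1.0 - p1, 1.0 - p2
    return (p1 * q2) / (p1 * q2 + q1 * p2)

# Two sanity checks implied by the formula: equally strong teams are
# a coin flip, and playing a .500 (average) team just returns a
# team's own winning fraction.
print(log5(0.6, 0.6))  # 0.5
print(log5(0.6, 0.5))  # 0.6
```

Note also the built-in symmetry: the estimate for team 2 beating team 1 is exactly one minus the estimate for team 1 beating team 2.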
In these cases it's unambiguous which team is stronger. The cases where both teams are stronger, or both weaker, than an average team (the ambiguous cases) are thus discarded.<br /><br />How could this be useful? Instead of ignoring the ambiguous outcomes when estimating the outcome probabilities under this <b>"latent strength"</b> model, we could determine which probability distributions best fit the outcome distributions for a given league! Furthermore, this allows us to cohesively put a power rating system into a Bayesian framework by assigning to each team a <a href="https://en.wikipedia.org/wiki/Prior_probability" target="_blank"><b>Bayesian prior</b></a> strength distribution. These priors could either be <a href="https://en.wikipedia.org/wiki/Prior_probability#Uninformative_priors" target="_blank"><b>uninformative</b></a> or <a href="https://en.wikipedia.org/wiki/Prior_probability#Informative_priors" target="_blank"><b>informative</b></a>, using e.g. preseason rankings.http://angrystatistician.blogspot.com/2013/12/a-stupid-and-strange-way-of-looking-at.htmlnoreply@blogger.com (Christopher Long)2tag:blogger.com,1999:blog-9149402429183581490.post-6620747293907708484Thu, 07 Nov 2013 23:28:00 +00002013-11-07T16:01:12.861-08:00Building a Personal SupercomputerIt's time for a workstation upgrade; here's what I've assembled.<br /><br />The massive case has plenty of space for additional drives to store the extreme amount of data that nearly every sport is now collecting, together with the associated video footage. 
The case, power supply and motherboard allow up to 3 additional video cards for use as GPU units, as your analytical needs demand (and your budget can handle).<br /><ol><li><a href="http://www.amazon.com/gp/product/B00BCXF4JQ/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B00BCXF4JQ&linkCode=as2&tag=octonion-20" target="_blank">Cooler Master Cosmos II - Ultra Full Tower Computer Case</a><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://ecx.images-amazon.com/images/I/91O40bKe-mL._SL1500_.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" height="320" src="http://ecx.images-amazon.com/images/I/91O40bKe-mL._SL1500_.jpg" title="Cooler Master Cosmos II" width="259" /></a></div><br />An absolutely massive case - plenty of room for an E-ATX motherboard, large power supply, multiple video cards and hard drives. <br /><br /></li><li><a href="http://www.amazon.com/gp/product/B008U12856/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B008U12856&linkCode=as2&tag=octonion-20" target="_blank">PC Power & Cooling 1200W Silencer MK III Series Modular Power Supply</a><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://ecx.images-amazon.com/images/I/41-vt3pnUJL.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" height="214" src="http://ecx.images-amazon.com/images/I/41-vt3pnUJL.jpg" title="PC Power & Cooling 1200W Silencer MK III" width="320" /></a></div><br />An exceptionally large power supply, but its platinum rating and ample headroom allow it to operate nearly silently, and it'll handle any additional video cards you add later for GPU computing. 
<br /><br /></li><li><a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16813132053" target="_blank">ASUS Rampage IV Black Edition LGA 2011 Extended ATX Intel Motherboard</a><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://images10.newegg.com/NeweggImage/productimage/13-132-053-02.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" height="240" src="http://images10.newegg.com/NeweggImage/productimage/13-132-053-02.jpg" title="ASUS Rampage IV Black Edition" width="320" /></a></div><br />Space for 4 video cards, 8 memory sticks (up to 64GB), superb quality and extreme tweakability. <br /><br /></li><li><a href="http://www.amazon.com/gp/product/B00EMHM622/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B00EMHM622&linkCode=as2&tag=octonion-20" target="_blank">Intel i7-4930K LGA 2011 64 Technology Extended Memory CPU</a><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://ecx.images-amazon.com/images/I/91++fq52iVL._SL1500_.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" height="320" src="http://ecx.images-amazon.com/images/I/91++fq52iVL._SL1500_.jpg" title="Intel i7-4930K LGA 2011" width="236" /></a></div><br />Ivy Bridge, 6 cores, exceptional ability to overclock. 
<br /><br /></li><li><a href="http://www.amazon.com/gp/product/B008RAS8AO/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B008RAS8AO&linkCode=as2&tag=octonion-20" target="_blank">(2) Corsair Dominator Platinum 32GB (4x8GB) DDR3 2133 MHz</a><img alt="" border="0" height="1" src="http://ir-na.amazon-adsystem.com/e/ir?t=octonion-20&l=as2&o=1&a=B008RAS8AO" style="border: none !important; margin: 0px !important;" width="1" /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://ecx.images-amazon.com/images/I/51H33Y5+3zL.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" height="320" src="http://ecx.images-amazon.com/images/I/51H33Y5+3zL.jpg" title="Corsair Dominator Platinum 32GB (4x8GB)" width="320" /></a></div><br />Total of 64GB. Top-quality, you'll need your RAM in matched sets of 4 to enable quad-channel.<br /><br /></li><li><a href="http://www.amazon.com/gp/product/B00BL8BX7O/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B00BL8BX7O&linkCode=as2&tag=octonion-20" target="_blank">EVGA GeForce GTX TITAN SuperClocked 6GB GDDR5 384bit</a><img alt="" border="0" height="1" src="http://ir-na.amazon-adsystem.com/e/ir?t=octonion-20&l=as2&o=1&a=B00BL8BX7O" style="border: none !important; margin: 0px !important;" width="1" /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://ecx.images-amazon.com/images/I/71phIxf42XL._SL1200_.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" height="320" src="http://ecx.images-amazon.com/images/I/71phIxf42XL._SL1200_.jpg" title="EVGA GeForce GTX TITAN SuperClocked" width="320" /></a></div><br /> Top-of-the-line NVIDIA Kepler card for GPU computing. 
2688 CUDA cores that reach roughly 4800 GFLOPS single-precision and 1600 GFLOPS double-precision.<br /><br /></li><li><a href="http://www.amazon.com/gp/product/B00D1GYNT4/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B00D1GYNT4&linkCode=as2&tag=octonion-20" target="_blank">(2) Seagate NAS HDD 2TB SATA 6GB NCQ 64 MB Cache Bare Drive</a><img alt="" border="0" height="1" src="http://ir-na.amazon-adsystem.com/e/ir?t=octonion-20&l=as2&o=1&a=B00D1GYNT4" style="border: none !important; margin: 0px !important;" width="1" /><br /><br />Solid hard drive; you'll need at least 2 drives for a RAID 1 mirror, or 4 or more for a RAID 10 array.<br /><br /></li><li><a href="http://www.amazon.com/gp/product/B009NB8WRU/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B009NB8WRU&linkCode=as2&tag=octonion-20" target="_blank">Samsung Electronics 840 Pro Series 2.5-Inch 256 GB SATA 6GB/s Solid State Drive</a><img alt="" border="0" height="1" src="http://ir-na.amazon-adsystem.com/e/ir?t=octonion-20&l=as2&o=1&a=B009NB8WRU" style="border: none !important; margin: 0px !important;" width="1" /><br /><br />Small, fast SSD for quick booting and anything bottlenecked by drive read/write times.<br /><br /></li><li><a href="http://www.amazon.com/gp/product/B008O510FW/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=B008O510FW&linkCode=as2&tag=octonion-20" target="_blank">Silverstone Tek 3.5-inch to 2 x 2.5-Inch Bay Converter</a><img alt="" border="0" height="1" src="http://ir-na.amazon-adsystem.com/e/ir?t=octonion-20&l=as2&o=1&a=B008O510FW" style="border: none !important; margin: 0px !important;" width="1" /><br /><br />Needed to adapt the SSD to the Cosmos II case.<br /><br /></li><li><a href="http://www.ubuntu.com/" target="_blank">Ubuntu Linux</a><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://design.ubuntu.com/wp-content/uploads/logo-ubuntu_st_no%C2%AE-black_orange-hex.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" 
border="0" height="226" src="http://design.ubuntu.com/wp-content/uploads/logo-ubuntu_st_no%C2%AE-black_orange-hex.png" title="Ubuntu Linux" width="320" /></a></div><br /> Ubuntu Linux - what else? </li></ol>http://angrystatistician.blogspot.com/2013/11/building-personal-supercomputer.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-1224011523756248298Mon, 16 Sep 2013 06:58:00 +00002013-09-16T00:10:30.580-07:00The Good, the Bad and the Weird: Duels and the Gentleman's Draw<div>As I mentioned in the previous article <a href="http://angrystatistician.blogspot.com/2013/08/the-good-bad-and-weird-duels-truels-and.html">"The Good, the Bad and the Weird: Duels, Truels and Utility Functions"</a>, a classic probability puzzle involves a 3-way duel (called a <a href="http://en.wikipedia.org/wiki/Truel">"truel"</a>).</div><blockquote class="tr_bq">A, B and C are to fight a three-cornered pistol duel. All know that A's chance of hitting his target is 0.3, C's is 0.5, and B never misses. They are to fire at their choice of target in succession in the order A, B, C, cyclically (but a hit man loses further turns and is no longer shot at) until only one man is left. What should A's strategy be?</blockquote><div>There's a subtle issue involved in these types of problems in that we don't know how each participant values each outcome. If we allow duelists to deliberately miss there are \(2^3-1=7\) possible outcomes; each person may or may not be shot and at least one person will not be shot. Even if deliberate missing isn't allowed, there are still 3 possible outcomes. A, for example, could conceivably value B winning more than C winning.<br /><br />The classic solution concludes that with the given hit probabilities the optimal strategy for the first trueler is to deliberately miss. My contention is that this is an incomplete solution; for some sets of utility values this may not be the first trueler's optimal strategy. 
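The outcome count of \(2^3-1=7\) is easy to verify by brute-force enumeration; a quick sketch (Python, illustrative only):

```python
from itertools import product

# Each of the three participants is either shot (True) or not (False).
# The only impossible outcome is all three being shot, since the last
# man standing has no one left to shoot him.
outcomes = [shots for shots in product([False, True], repeat=3)
            if not all(shots)]
print(len(outcomes))  # 7
```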
Let's examine duels using utility values as the first step towards addressing truels.<br /><br /></div><div>Let the two duelers have hit probabilities \( p_i \) for \( i=1,2\), and also let the values each assigns to a dueling win, loss or tie be \( W_i, L_i, T_i\). I'm assuming these values are all known to both duelers and that \( W_i > T_i > L_i\). Note that a strategy that optimizes expected utility is preserved under a <a href="http://www.econ.ucsb.edu/~tedb/Courses/Ec100C/VarianExpectedUtility.pdf%E2%80%8E" target="_blank">positive affine transformation</a>, so under the assumption that all values are finite, we may assume \(W_i=1\) and \(L_i=0\) for all \(i\).<br /><br />We'll declare the duel a "gentleman's draw" if both duelers deliberately miss in a single round. Let the expected utility value for dueler \(i\) be \(U_i\).<br /><br />Trivially, if the optimal strategy for dueler 1 is trying for a hit, the optimal strategy for dueler 2 must be to also try for a hit. Conversely, the optimal strategy for dueler 1 can be to deliberately miss if and only if the optimal strategy for dueler 2 is also to deliberately miss. Assume the draw is taken if there's indifference (they are, after all, gentlemen).<br /><br />If dueler 1 deliberately misses, dueler 2 has two choices. If he deliberately misses, it's a gentleman's draw and \(U_2 = T_2\); if he tries for a hit, both duelers will subsequently try to hit each other and \(U_2 = p_2 + q_2 q_1 U_2 \). Solving, we get \[U_2 = \frac{p_2}{1-q_1 q_2}.\] It's therefore optimal for dueler 2 to take the gentleman's draw if and only if \[T_2 \geq \frac{p_2}{1-q_1 q_2}.\] As a consequence, dueler 1 will not deliberately miss if \[T_2 < \frac{p_2}{1-q_1 q_2}.\] If dueler 1 tries for a hit, both will subsequently try to hit each other and we have \( U_1 = p_1 + q_1 q_2 U_1\). 
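(The expression just derived for \(U_2\) can be sanity-checked numerically: iterating the recurrence \(U_2 = p_2 + q_2 q_1 U_2\) to its fixed point should converge to \(\frac{p_2}{1-q_1 q_2}\). A minimal sketch in Python, with function names of my own invention:)

```python
def u2_closed(p1, p2):
    # Closed-form expected utility for dueler 2 when both fire: p2 / (1 - q1*q2).
    return p2 / (1.0 - (1.0 - p1) * (1.0 - p2))

def u2_iterated(p1, p2, rounds=200):
    # Iterate U2 <- p2 + q2*q1*U2: dueler 2 wins this round,
    # or both miss and the duel repeats.
    q1, q2 = 1.0 - p1, 1.0 - p2
    u = 0.0
    for _ in range(rounds):
        u = p2 + q2 * q1 * u
    return u

print(u2_closed(0.5, 0.5))    # 2/3, the draw threshold used in the example below
print(u2_iterated(0.5, 0.5))  # converges to the same value
```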
Solving, we get \[ U_1 = \frac{p_1}{1-q_1 q_2}.\] Thus, the optimal strategy for dueler 1 is to deliberately miss if and only if \[ T_1 \geq \frac{p_1}{1-q_1 q_2}\] and \[T_2 \geq \frac{p_2}{1-q_1 q_2}.\] These are, as expected, precisely the conditions under which dueler 2 will deliberately miss.<br /><br />For example, if \(p_1=p_2=1/2\), it'll be a gentleman's draw if and only if \(T_1,T_2 \geq 2/3\). Paradoxically, both will fire under these hit probabilities if \( T_1 = 4/5 \) and \( T_2 = 1/2 \), even though this leaves dueler 1 with expected utility \(2/3\), lower than the \(4/5\) a draw would have given him; since \(T_2 = 1/2 < 2/3\), dueler 2 prefers to fire, and no draw is possible. This is akin to a <a href="http://en.wikipedia.org/wiki/Prisoner's_dilemma" target="_blank">prisoner's dilemma</a>.</div>http://angrystatistician.blogspot.com/2013/09/the-good-bad-and-weird-duels-and.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-5812573797954626341Sun, 11 Aug 2013 09:10:00 +00002013-08-11T02:10:20.672-07:00A Man, a Plan, a Cat, a Canal - Panama!A man, a plan, a cat, a master, a minae, a lunula, a tora, a vela, a cola, a belga, a tola, an uta, a dona, a tiara, an aura, a kata, a mana, a nipa, a cabana, a law, a harem, a taces, a liard, a gnat, a mut, a dawed, a bura, a nim, a levator, an eh, a trams, a sall, a vacates, an up, a garb, a keets, a serin, a sored, a patamar, a gater, a lees, a strep, a lama, a pat, a lame, a haded, a wag, a naan, a kalian, a fer, a yaws, a remit, a keep, a loom, a neep, a ha, an ai, a retailor, a cabals, a subah, a slits, a pated, a callet, a xu, a faena, a tups, a sati, a rapider, a reps, a rapid, a korat, a dens, a sled, a tical, a pipages, a rats, a seeks, a defi, a wasts, a peracid, a nomas, a rosarian, a bran, a nonet, a gama, a sac, a scrota, a lab, a cain, an aid, a statin, a reel, a tiff, a haj, a talas, a nares, a mares, a laics, an allot, a til, an arena, a nits, a palace, a fanum, a yarer, a daub, a basts, a masas, a called, a slay, a galoots, a senega, a ton, a 
span, a mum, a dracaenas, a coda, a must, a satis, a lion, a yob, a reknit, a dessert, a lex, an aver, a red, a fab, a katas, a deet, a diaper, a nos, a snips, a spot, a fed, a wap, a past, a galax, an apos, a zaps, a lat, a rai, a resale, a tallit, a boob, a hag, a fayed, a sports, a loof, a ragg, a sashay, an agar, a selah, a spit, a brad, a strap, a wolf, a reified, a navar, a cases, a bedaub, a tassels, a gaes, a runaway, an amahs, a burg, a sorter, a del, a basely, a kae, an evades, a lader, a waved, a spirts, a pas, a slam, a hasps, a nalas, a snog, a cedar, an ulama, a lasagnas, a leud, a poh, a larum, a si, a retained, a pay, a pit, a raps, a repots, a sell, a datums, a yard, a hey, a terces, a prod, a skips, a gip, a lin, a sib, a lac, a jaded, a later, a beef, a sallet, a panicum, a gnar, a pall, a gas, a rotator, a tamis, a gel, a snawed, a decaf, a pol, a jawans, a dual, a gateman, a tins, a ran, a riff, a za, a minim, a get, a bard, a spud, a sum, a stem, a rases, a tsar, a lobate, a hasp, a hwan, a lotahs, an agas, a pad, a reed, a slim, an attar, a steels, a medina, a gar, a tub, an asarum, a pows, an ava, a mib, a map, a stab, a rasp, a tala, a nadir, a seton, a stoped, a repoll, a waste, a hades, a ragi, a tsades, a bap, a rain, a ratels, a hasten, a gid, a snot, a basal, a vinas, an upas, a nonacid, a naval, a lahar, a sonar, a lovat, a wares, a bal, a ripsaw, a jatos, a sup, a sh, a haed, a hailed, a moor, a vaned, a delf, a revel, a tan, a meter, a sob, an evil, a deific, a peracids, an ass, a bait, a buran, a far, a lamas, a robalo, a gaum, a sha, an arak, a macer, a macs, a snub, a stirps, a deffer, a perp, a naw, a jar, a vain, a ban, a miladi, a rasps, a wallop, a rex, a waney, a rase, a caird, a patrons, a tun, a stob, a japans, a maws, an awa, a con, a spoons, a fir, a stroma, a mil, a keen, a lace, a canon, an allod, a spay, a ten, a kop, a yah, a regal, a modal, a dinar, a madame, a hallahs, a dor, a repel, a reflet, a noma, a gen, a 
fast, a waggon, a dewan, a recap, a wort, a pin, a sung, a bag, a sos, a tepal, an abash, an ami, a manias, a pees, a diols, a repp, a taxed, a deb, a spans, a reman, a reward, a poons, a smug, a now, a teff, a hakeem, a seem, a rami, a simaruba, a ness, a cadets, a teraohm, a knob, a sewan, a seven, an oot, a skeets, a demur, a lanai, an unassuaged, a stared, a seam, a meed, a naos, a bus, a wad, a xis, a hon, a dan, a snail, a kayak, a looter, a gob, a men, a sorb, an inner, an oxen, an emes, a seder, a reknits, a loid, a saw, a pint, a castes, a stop, a debut, a tows, a tog, a kips, a rep, a cask, a yam, a minareted, a reflow, a laced, a bun, a keels, an ort, an assent, a faun, a reign, a manat, a rakus, a lade, a dater, a minarets, a nay, a fanos, a matt, an arhat, a tram, a tel, a vat, a ba, a pal, a pager, a tarps, a plug, a nod, a rash, a rotas, a ramal, a butane, a galiot, a paws, a pals, a stressed, a derat, an um, a mac, a sri, a mas, a lagan, an ajar, a laid, a big, a lass, a vara, a mail, a sikas, a yawl, an add, a simas, a romano, a kalam, a tannin, a less, a tater, a carer, a pasts, a caseic, a papayas, a salal, a haffits, a snivel, an atlas, a bases, a macaber, a tress, an alerts, a water, an imaginer, a pec, a golf, a pupils, a gnaws, a mos, a la, a baklawa, a gan, a stums, a step, an ikat, a keel, a dot, a loll, a tares, a renamed, a nanas, a sematic, a tap, a zareeba, a mob, an ales, a lox, a tass, a matsah, a ports, a names, a batik, a secs, a fane, a haleru, a lasts, a fats, a valley, a spam, a spools, a saner, an aspis, a hun, a stops, a tots, a tils, a gamely, a put, a las, a vaw, a ranid, a ham, an ataps, a barca, a gig, a strow, a struts, a speed, a draw, an axon, a gaga, a birr, an agnail, a pow, a tuber, a teel, a rip, a tail, a tatar, a hay, a ray, a lall, a barege, a janitor, a caff, an auk, a hah, an areic, a raster, a casks, a basil, a bubal, a donas, a bananas, a lobar, a lades, a cadi, a caw, a tad, an ogam, a janes, a sirup, a cram, 
a lets, a passus, a nuts, a muton, a cyma, a mix, a madam, a let, a corban, a partan, a melanin, a top, a wot, a bad, a bows, a resod, a nae, a parol, a vastness, an anime, a hares, a rax, a lam, a tabbis, a bins, a liar, a straw, a pant, a cares, a papa, a res, a sass, a mattes, a nopal, a daff, a yapon, a toped, an ivy, a maven, a ganef, a sovran, a gorp, a divan, a dexes, a moot, a prat, a nips, a redips, a mood, a sabir, a zarebas, a shales, a nett, a papal, a japan, a swots, a seral, a sip, a tarots, a path, a baseness, a callan, a drib, a nairu, a salep, a last, a babuls, a kiths, an orc, a daps, a galoot, a stink, a sail, an ameer, a haslets, a pats, a medal, a ganof, a dim, an atoll, a hall, a feral, a robalos, a sin, a radioed, a cycases, a kuna, an as, an arepa, a ki, a haps, an asks, a mair, a saloops, a vug, a stang, a tophs, a dak, a yay, a serac, a vav, a lap, a gad, a laden, a manioc, a fin, a dub, a loons, a teemer, a sris, a soar, a matador, a saros, a vavassor, a kayos, a setal, a maid, a waw, a palet, a rayon, a solon, a dine, a rumaki, a hae, a gaff, a lipases, a ladder, a stub, an akees, a seller, a manats, a barytas, a lied, a nona, a mars, an ait, a suber, a cimex, a tau, a vail, a grana, a pan, a sums, a ramadas, a rood, a teels, a muts, a torr, a paeon, a sow, an alfa, a rob, a mures, a toper, a rerun, a mall, a canoe, an apart, a seer, a saps, a wait, a laxer, a cats, a manor, a baton, a tag, a strown, a fax, a wab, a jauk, a racial, a lev, a gasser, a careens, a swat, a rein, a wanigan, a tenner, a flora, a rolf, a rennet, an agin, a wanier, a taws, a sneer, a caress, a gavel, a laic, a raku, a jab, a wax, a fanworts, a gat, a not, a baron, a mast, a carex, a lati, a wasp, a sarees, a trap, an aeon, a call, a manurer, a repot, a serum, a bora, a flan, a wos, an oe, a parrot, a stum, a sleet, a door, a sad, a marasmus, a napa, an argali, a vau, a taxemic, a rebus, a ti, an asrama, an on, a deil, a satyr, a bast, an amarelles, a seek, an 
abuts, a redd, a lases, a pilaff, a gae, a haik, a muraenid, a nolos, an oy, a ratel, a paw, a wadi, a malates, a soy, a kaross, a vavasor, a sarod, a tamaraos, a sirs, a remeet, a snool, a bud, an if, a coin, a maned, a lad, a gap, a lav, a vac, a resay, a yak, a dashpot, a gnats, a guv, a spool, a sari, a masks, an asp, a haika, a per, an asana, an ukases, a cycadeoid, a ranis, a sol, a boral, a refall, a hallot, an amid, a fon, a gal, a demast, a pastels, a hareem, an alias, a knits, a tool, a gasp, a dacron, a shtik, a slub, a bats, a lapel, a saurian, a bird, an all, a cassenes, a baht, a pastor, a tapis, a lares, a stows, a nap, a jalap, a patten, a selahs, a saber, a zaribas, a doom, a spider, a spin, a tarp, a toom, a sexed, an avid, a prog, an arvos, a fen, a ganev, a mayvin, a depot, an op, a yaff, a dalapon, a sett, a massas, a sera, a pap, a ser, a catnap, a warts, a rail, a snib, a sibb, a tam, a lax, a raser, a haemin, an assents, a valor, a paean, a doser, a swob, a dab, a tow, a pot, an in, a leman, a trapan, a brocatel, a mad, a maxima, a myc, a notum, a stun, a suss, a pastel, a marc, a puris, a sen, a jam, a gonad, a taw, a caid, a cased, a lar, a bolas, an anabas, a nodal, a bubalis, a basks, a carets, a racier, a nah, a haku, a naff, a carotin, a jaeger, a ball, a lay, a rayah, a rat, a tali, a tapir, a leet, a rebut, a wop, a liang, an arriba, a gag, an ox, an award, a deeps, a sturts, a worts, a giga, a crab, a spat, an amah, a din, a raw, a vasal, a tup, a ylem, a gaslit, a stot, a spots, an uh, a sips, an arenas, a sloops, a maps, a yell, a vast, a fasts, a laurel, a haen, a fasces, a kit, a baseman, a strop, a hast, a mass, a taxol, a sel, an aboma, a beer, a zap, a tacit, a mesas, an anadem, an eraser, a tallol, a tod, a leek, a takin, a pets, a smuts, an aga, a walk, a baal, a som, a swang, a slipup, a flog, a cep, a renig, a minaret, a wastrel, an assert, a reb, a camases, a basalt, an alevins, a stiff, a halalas, a say, a papacies, a 
casts, a parer, a caret, a tassel, an inn, a tamal, a kaon, a moras, a misadd, an alway, a sakis, a li, a maar, a vassal, a gib, a dial, a raj, a nan, a galas, a mairs, a cam, a mun, a tared, a desserts, a slap, a swap, a toil, a gaen, a tubal, a maras, a torahs, a radon, a gulp, a sprat, a reg, a palapa, a bat, a valet, a mart, a tahr, an att, a mason, a fay, an aster, an imaret, a daedal, a suk, a ratan, a mangier, a nu, a fatness, a natron, a sleek, a nub, a decal, a wolfer, a deter, an imam, a yaks, a caper, a spik, a got, a swot, a tubed, a pots, a sets, a catnip, a was, a diol, a stinker, a redes, a semen, an exon, a rennin, a bros, an em, a bog, a retool, a kay, a kalians, an ad, a noh, a six, a daw, a sub, a so, an adeem, a maes, a derats, a degauss, an unai, an alarumed, a steeks, a toon, a neves, an awes, a bonk, a mho, a retasted, a cassena, a bur, a misaim, a ramees, a meek, a haffet, a won, a gums, a snoop, a drawer, a namer, a snaps, a bed, a dex, a tapper, a sloid, a seep, a sain, a maim, an ahs, a banal, a petasos, a gab, a gnus, a nip, a trow, a pacer, an awed, a nogg, a wats, a fanega, a mon, a telfer, a leper, a rod, a shall, a haem, a damar, a nidal, a domal, a gerah, a yapok, a net, a yaps, a doll, an anon, a caecal, an eek, a lima, a morts, a rif, a snoops, an oca, a wan, a swam, a snap, a jabots, a nut, a snort, a padri, a caesar, a yen, a waxer, a poll, a wasps, a raid, a liman, a bani, a var, a jawan, a prep, a reffed, a sprits, a buns, a scam, a rec, a makar, an aahs, a mu, a gaol, a boras, a malar, a fan, a rubati, a bass, an asdic, a repacified, a liven, a bos, a retem, an at, a lever, a fled, a den, a varoom, a deli, a hade, a hahs, a pus, a sot, a jaw, a spiral, a baser, a wat, a volar, an osar, a halal, a vanadic, a nonas, a punas, a nival, a sabatons, a dig, a nets, a haslet, a rani, a rap, a based, a staig, a rased, a haets, a walloper, a depots, a notes, a rid, an aal, a taps, a rabats, a pam, a bima, a van, a swop, a muras, an 
abut, a raga, an idem, a sleets, a rattan, a mils, a deer, a dap, a sag, an ash, a tolan, a whaps, a haet, a bolar, a stases, a ramets, a mus, a dups, a drab, a teg, a minima, a zaffir, an ar, a snit, a nametag, a laud, a snaw, a jalop, a faced, a dewans, a leg, a sim, a tarot, a toras, a gall, a parang, a mucin, a patellas, a feeb, a ret, a laded, a jacal, a bis, a nil, a pig, a spiks, a dorp, a secret, a yeh, a dray, a smut, a dalles, a stoper, a spar, a tip, a yap, a deni, a terais, a mural, a hop, a duel, a sang, a salaam, a lunar, a decagons, a sal, an asps, a hamals, a sap, a strips, a dev, a wared, a lased, a venae, a kayles, a baled, a retros, a grub, a shaman, a yaw, an urase, a gasless, a tabu, a debases, a caravan, a deifier, a flow, a parts, a darb, a tips, a hales, a rag, an ayahs, a saggar, a fool, a strops, a dey, a fag, a haboob, a till, a tael, a serai, a ratal, a spaz, a sop, an axal, a gats, a papaw, a def, a tops, a spins, a son, a repaid, a teed, a sat, a kab, a fader, a rev, an axel, a tressed, a tinker, a boy, a noil, a sit, a satsuma, a doc, a sane, a cardamum, a naps, a nota, a genes, a stool, a gayals, a dell, a casas, a masts, a babu, a darer, a yamun, a faecal, a pastina, an er, an alit, a toll, an asci, a laser, a maser, a nasal, a taj, a haffit, a leer, a nit, a tsadi, an ani, a cabala, a torcs, a casa, a mag, a tenon, an arb, an air, a soras, a monadic, a repasts, a waifed, a skees, a star, a seg, a pipal, a citadels, a sned, a tarok, a dip, a rasper, a redip, a raitas, a sputa, a ne, a faux, a tell, a cadet, a pastils, a habus, a slab, a caroli, a terai, an aah, a peen, a mool, a peek, a timer, a sway, a ref, a nail, a kana, a nag, a waded, a haemal, a tapa, a malaperts, a seel, a retag, a ram, a tapaderos, an ires, a steek, a brag, a pun, a set, a cavallas, a smart, a hen, a rot, a velamina, a rub, a dew, a datum, a tang, a drail, a sec, a tamer, a hawala, an abaca, a pina, an ama, a taka, a ruana, a raita, an oda, a tuna, a lota, a 
gleba, a loca, a leva, a rota, a lunulae, an imarets, a mat, a canal - Panama!http://angrystatistician.blogspot.com/2013/08/a-man-plan-cat-canal-panama.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-3974033514939034397Sun, 11 Aug 2013 06:03:00 +00002013-08-11T15:34:21.222-07:00The Good, the Bad and the Weird: Duels, Truels and Utility FunctionsIn the excellent (and highly recommended) book <a href="http://www.amazon.com/Challenging-Problems-Probability-Solutions-Mathematics/dp/0486653552" target="_blank">"Fifty Challenging Problems in Probability with Solutions"</a>, Frederick Mosteller poses "The Three-Cornered Duel":<br /><blockquote class="tr_bq">A, B and C are to fight a three-cornered pistol duel. All know that A's chance of hitting his target is 0.3, C's is 0.5, and B never misses. They are to fire at their choice of target in succession in the order A, B, C, cyclically (but a hit man loses further turns and is no longer shot at) until only one man is left unhit. What should A's strategy be?</blockquote>This is problem 20 in Mosteller's book, and it also appears (with an almost identical solution) in Larsen & Marx's <a href="http://www.amazon.com/Introduction-Mathematical-Statistics-Its-Applications/dp/0321693949" target="_blank">"An Introduction to Mathematical Statistics and Its Applications"</a>.<br /><br />Mosteller's solution:<br /><div><blockquote class="tr_bq">A is naturally not feeling cheery about this enterprise. Having the first shot he sees that, if he hits C, B will then surely hit him, and so he is not going to shoot at C. If he shoots at B and misses him, then B clearly shoots the more dangerous C first, and A gets one shot at B with probability 0.3 of succeeding. If he misses this time, the less said the better. On the other hand, suppose A hits B. Then C and A shoot alternately until one hits. A's chance of winning is \( (.5)(.3) + (.5)^2(.7)(.3) + (.5)^3(.7)^2(.3) + \ldots \).
Each term corresponds to a sequence of misses by both C and A ending with a final hit by A. Summing the geometric series we get ... \(3/13 < 3/10\). Thus hitting B and finishing off with C has less probability of winning for A than just missing the first shot. So A fires his first shot into the ground and then tries to hit B with his next shot. C is out of luck.</blockquote>Is this right? What if B were to follow A's example and fire into the ground? What if all three were to keep firing into the ground? That this type of outcome isn't unreasonable for certain sets of accuracy probabilities can be illustrated by considering the case where A's accuracy is 0.98, B's accuracy is 1.0 and C's accuracy is 0.99. Mosteller's argument applies equally well in this case, but if B shoots C after A deliberately misses, he'll then be shot by A with probability 0.98. Is that reasonable?<br /><br />Under the assumption that deliberately missing is allowed, there are \(2^3-1=7\) possible outcomes: each participant can be shot or not shot, and at least one participant must remain unshot. What the ideal strategies are for A, B and C in the general case is unclear because the utility of a 2- or 3-way tie to each participant is undefined.<br /><br />In the next article I'll analyze 2-way duels where deliberate missing is allowed, using fully-defined utility functions.
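As an aside, Mosteller's arithmetic is easy to check numerically. The following sketch (mine, not part of the original solution) sums the geometric series by brute force and compares the two strategies:

```python
# After A hits B, C (accuracy 0.5) and A (accuracy 0.3) alternate shots, C first.
# In each round A wins outright with probability 0.5 * 0.3 (C misses, A hits),
# and the duel continues with probability 0.5 * 0.7 (both miss).
p_round_win = 0.5 * 0.3
p_continue = 0.5 * 0.7

# Brute-force partial sum of the geometric series ...
p_after_hitting_B = sum(p_continue**k * p_round_win for k in range(200))

# ... agrees with the closed form 3/13.
assert abs(p_after_hitting_B - 3 / 13) < 1e-12

# Deliberately missing: B shoots C, then A gets one shot at B.
p_after_missing = 0.3

# 3/10 > 3/13, so firing into the ground beats shooting at B, as Mosteller claims.
assert p_after_missing > p_after_hitting_B
```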
These results will be used in a third article on 3-way duels (truels); in particular, I'll re-examine Mosteller's solution.</div>http://angrystatistician.blogspot.com/2013/08/the-good-bad-and-weird-duels-truels-and.htmlnoreply@blogger.com (Christopher Long)0tag:blogger.com,1999:blog-9149402429183581490.post-2191741530849988465Fri, 09 Aug 2013 08:23:00 +00002013-08-10T01:21:01.004-07:00Solving a Bar Bet Using at Most Three Different OperationsA bar bet as presented in the YouTube video <a href="http://www.youtube.com/watch?v=DoRB7FL02t4" target="_blank">The HARDEST Puzzle Yet!</a> involves starting with three copies of the same number from 0 through 9, then adding mathematical operations to produce an expression that evaluates to 6. For example, if we start with three of the number 6, one solution could be \[ 6+6-6=6. \] I'll demonstrate a method for solving this bar bet puzzle starting with three of any number, say \( N \), which uses at most three different mathematical operations (although some of these may be applied many, many times).<br /><br />If \( 0 \leq N \leq 2 \) we have the solutions<br />\begin{eqnarray}<br />(0! + 0! + 0!)! &= 6 \\<br />(1! + 1! + 1!)! &= 6 \\<br />2+2+2 &= 6.<br />\end{eqnarray} If \( N\geq 3\), concatenate the three copies of \( N \) into a single number. Repeatedly applying the square-root operation, we'll eventually end up with a result \( x \) satisfying \( 3 \leq x < 9\): the first value to drop below 9 is the square root of a value that was at least 9, and hence is at least 3. If we now take the greatest integer \(\lfloor x \rfloor\), we have an integer \( n \) with \( 3 \leq n \leq 8 \). If we can exhibit solutions for each of these cases that use only square roots, greatest integers and one other operation, we'll be done.
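The reduction step just described can be sketched in a few lines of Python (the function name <code>reduce_to_small</code> is my own invention):

```python
import math

def reduce_to_small(N: int) -> int:
    """Concatenate three copies of N (for N >= 3), take square roots until
    the value first drops below 9, and return its integer part."""
    x = float(str(N) * 3)       # e.g. N = 1337 -> 133713371337
    while x >= 9:
        x = math.sqrt(x)        # predecessor was >= 9, so x >= 3 on exit
    return math.floor(x)        # an integer n with 3 <= n <= 8
```

For example, <code>reduce_to_small(1337)</code> returns 4 (four square roots of 133713371337 give about 4.96), matching the \( N=1337 \) worked example at the end of the post.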
Using factorial for the third operation, some possibilities are<br />\begin{eqnarray}<br />\href{http://www.wolframalpha.com/input/?i=3%21}{3!} &= 6\\<br />\href{http://www.wolframalpha.com/input/?i=%5Cleft%5Clfloor+%5Csqrt%7B%5Csqrt%7B%5Cleft%5Clfloor+%5Csqrt%7B%5Csqrt%7B%5Csqrt%7B%5Csqrt%7B%5Csqrt%7B%284%21%29%21%7D%7D%7D%7D%7D+%5Cright%5Crfloor%21%7D%7D+%5Cright%5Crfloor+%21}{\left\lfloor \sqrt{\sqrt{\left\lfloor \sqrt{\sqrt{\sqrt{\sqrt{\sqrt{(4!)!}}}}} \right\rfloor!}} \right\rfloor !} &= 6\\<br />\href{http://www.wolframalpha.com/input/?i=%5Cleft%5Clfloor+%5Csqrt%7B%5Csqrt%7B5%21%7D%7D+%5Cright%5Crfloor+%21}{\left\lfloor \sqrt{\sqrt{5!}} \right\rfloor !} &= 6\\<br />\href{http://www.wolframalpha.com/input/?i=6}{6} &= 6 \\<br />\href{http://www.wolframalpha.com/input/?i=%5Cleft%5Clfloor+%5Csqrt%7B%5Csqrt%7B%5Csqrt%7B%5Cleft%5Clfloor+%5Csqrt%7B%5Csqrt%7B7%21%7D%7D%5Cright%5Crfloor+%21%7D%7D%7D+%5Cright%5Crfloor+%21}{\left\lfloor \sqrt{\sqrt{\sqrt{\left\lfloor \sqrt{\sqrt{7!}}\right\rfloor !}}} \right\rfloor !} &= 6\\<br />\href{http://www.wolframalpha.com/input/?i=%5Cleft%5Clfloor+%5Csqrt%7B%5Csqrt%7B%5Csqrt%7B8%21%7D%7D%7D+%5Cright%5Crfloor+%21}{\left\lfloor \sqrt{\sqrt{\sqrt{8!}}} \right\rfloor !} &= 6.<br />\end{eqnarray}<br />I've added Wolfram Alpha links so you can verify that these do indeed evaluate to 6.<br /><br />As an illustrative example, when \(N=1337\) we have the solution \[<br />\href{http://www.wolframalpha.com/input/?i=%5Cleft%5Clfloor+%5Csqrt%7B%5Csqrt%7B%5Cleft%5Clfloor+%5Csqrt%7B%5Csqrt%7B%5Csqrt%7B%5Csqrt%7B%5Csqrt%7B%28++%5Cleft%5Clfloor+%5Csqrt%7B%5Csqrt%7B%5Csqrt%7B%5Csqrt%7B133713371337%7D%7D%7D%7D+%5Cright%5Crfloor+%21%29%21%7D%7D%7D%7D%7D+%5Cright%5Crfloor%21%7D%7D+%5Cright%5Crfloor+%21}{\left\lfloor \sqrt{\sqrt{\left\lfloor \sqrt{\sqrt{\sqrt{\sqrt{\sqrt{\left( \left\lfloor \sqrt{\sqrt{\sqrt{\sqrt{133713371337}}}} \right\rfloor !\right)!}}}}} \right\rfloor!}} \right\rfloor !} = 
6.\]http://angrystatistician.blogspot.com/2013/08/solving-bar-bet-using-at-most-three.htmlnoreply@blogger.com (Christopher Long)0
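The base-case identities above can also be verified programmatically rather than through the Wolfram Alpha links. A minimal Python check (importing <code>factorial</code> as <code>f</code> purely as shorthand):

```python
from math import factorial as f, floor, sqrt

assert f(3) == 6                               # 3! = 6
assert f(floor(sqrt(sqrt(f(5))))) == 6         # n = 5: floor(120^(1/4)) = 3
assert f(floor(sqrt(sqrt(sqrt(f(8)))))) == 6   # n = 8: floor(40320^(1/8)) = 3

# n = 7 reduces to 8, which then reuses the n = 8 chain.
assert floor(sqrt(sqrt(f(7)))) == 8

# n = 4: five square roots of (4!)! -- i.e. the 32nd root of 24! -- floor
# to 5, which then reuses the n = 5 chain.
assert floor(f(f(4)) ** (1 / 32)) == 5
```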