
Posts

Showing posts from 2017

Probability and Cumulative Dice Sums

Notes on Setting up a Titan V under Ubuntu 17.04

I recently purchased a Titan V GPU to use for machine and deep learning, and in the process of installing the latest Nvidia drivers I hosed my Ubuntu 16.04 install. I was overdue for a fresh install of Linux anyway, so I decided to upgrade some of my drives at the same time. Here are some of my notes on the process I went through to get the Titan V working perfectly with TensorFlow 1.5 under Ubuntu 17.04.

Old install:
- Ubuntu 16.04
- EVGA GeForce GTX Titan SuperClocked 6GB
- 2TB Seagate NAS HDD + additional drives

New install:
- Ubuntu 17.04
- Titan V 12GB
- / partition on a 250GB Samsung 840 Pro SSD (had an extra around)
- /home partition on a new 1TB Crucial MX500 SSD
- new WD Blue 4TB HDD + additional drives

You'll need to install Linux in legacy mode, not UEFI, in order to use Nvidia's proprietary drivers for the Titan V. Note that Linux will cheerfully boot in UEFI mode, but will not load any proprietary drivers (including Nvidia's). You'll need proprietary drivers …
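Once the drivers are in place, a quick check from Python confirms TensorFlow can actually see the card. A minimal sketch, assuming a working TensorFlow 1.5 GPU build (the helper name is mine):

    # List the devices TensorFlow can see and confirm the Titan V shows up.
    from tensorflow.python.client import device_lib

    def gpu_descriptions():
        """Return a description of each GPU device visible to TensorFlow."""
        return [d.physical_device_desc for d in device_lib.list_local_devices()
                if d.device_type == 'GPU']

    gpus = gpu_descriptions()
    print(gpus if gpus else 'No GPU visible -- check the driver install.')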

Solving IMO 1989 #6 using Probability and Expectation

IMO 1989 #6: A permutation \(\{x_1, x_2, \ldots , x_{2n}\}\) of the set \(\{1, 2, \ldots , 2n\}\), where \(n\) is a positive integer, is said to have property \(P\) if \( | x_i - x_{i+1} | = n\) for at least one \(i\) in \(\{1, 2, \ldots , 2n-1\}\). Show that for each \(n\) there are more permutations with property \(P\) than without. Solution: We first observe that the expected number of pairs \(\{i, i+1\}\) for which \( | x_i - x_{i+1} | = n\) is \(E = 1\). To see this, note that if \(j\), \( 1 \leq j \leq n\), appears in position \(1\) or \(2n\) it's adjacent to one number, otherwise two. Thus the probability it's adjacent to its partner \(j+n\) in a random permutation is \[\begin{equation} \eqalign{ e_j &= \frac{2}{2n}\cdot \frac{1}{2n-1} + \frac{2n-2}{2n}\cdot \frac{2}{2n-1} \\ &= \frac{2(2n-1)}{2n(2n-1)} \\ &= \frac{1}{n}. } \end{equation}\] By linearity of expectation, the expected number of \(j\) adjacent to its partner \(j+n\) is \(\sum_{j=1}^{n} e_j = n \cdot \frac{1}{n} = 1\) …
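The expectation argument is easy to sanity-check numerically. A minimal Monte Carlo sketch (the trial count and helper names are my own):

    # Monte Carlo check: over random permutations of {1, ..., 2n}, the number
    # of adjacent pairs with |x_i - x_{i+1}| = n should average 1, and more
    # than half of all permutations should have at least one (property P).
    import random

    def adjacent_partner_count(perm, n):
        """Count indices i with |perm[i] - perm[i+1]| = n."""
        return sum(abs(a - b) == n for a, b in zip(perm, perm[1:]))

    def simulate(n=5, trials=100_000):
        total = hits = 0
        items = list(range(1, 2 * n + 1))
        for _ in range(trials):
            random.shuffle(items)
            c = adjacent_partner_count(items, n)
            total += c
            hits += c > 0
        print(f"mean count = {total / trials:.3f}, "
              f"P(property P) = {hits / trials:.3f}")

    simulate()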

Poisson Games and Sudden-Death Overtime

Let's say we have a game that can be reasonably modeled as two independent Poisson processes with team \(i\) having parameter \(\lambda_i\). If one team wins in regulation with team \(i\) scoring \(n_i\), then it's well-known that the MLE estimate is \(\hat{\lambda_i}=n_i\). But what if the game ends in a tie in regulation with each team scoring \(n\) goals and we have sudden-death overtime? How does this affect the MLE estimates for the winning and losing teams? Assume without loss of generality that team \(1\) is the winner in sudden-death overtime. As we have two independent Poisson processes, the probability of this occurring is \(\frac{\lambda_1}{\lambda_1 + \lambda_2}\). Thus, the overall likelihood we'd like to maximize is \[L = e^{-\lambda_1} \frac{{\lambda_1}^n}{n!} e^{-\lambda_2} \frac{{\lambda_2}^n}{n!} \frac{\lambda_1}{\lambda_1 + \lambda_2}.\] Letting \(l = \log{L}\) we get \[l = -{\lambda_1} + n \log{\lambda_1} - {\lambda_2} + n \log{\lambda_2} - 2 \log{n!} + \log{\lambda_1} - \log\left(\lambda_1 + \lambda_2\right)\] …
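The resulting MLEs can be checked numerically. A sketch assuming SciPy is available (the value of \(n\) and the optimizer settings are illustrative):

    # Numerically maximize the overtime-adjusted log-likelihood
    #   l = -l1 + n log l1 - l2 + n log l2 + log l1 - log(l1 + l2)
    # (dropping the constant -2 log n!) and compare to the regulation MLE n.
    import numpy as np
    from scipy.optimize import minimize

    n = 3  # both teams scored n goals in regulation; team 1 won in OT

    def neg_log_lik(params):
        l1, l2 = params
        return -(-l1 + n * np.log(l1) - l2 + n * np.log(l2)
                 + np.log(l1) - np.log(l1 + l2))

    res = minimize(neg_log_lik, x0=[n, n], bounds=[(1e-9, None)] * 2)
    print("MLE (winner, loser):", res.x)  # winner's estimate > n, loser's < n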

Why does Kaggle use Log-loss?

If you're not familiar with Kaggle, it's an organization dedicated to data science competitions, both to give companies a way to do analytics at potentially lower cost and to identify talented data scientists. Competitions are scored using a variety of functions, and the most common for binary classification tasks with confidence is something called log-loss, which is essentially \(\sum_{i=1}^{n} \log(p_i)\) (negated and averaged in practice), where \(p_i\) is your model's claimed confidence for test data point \(i\)'s correct label. Why does Kaggle use this scoring function? Here I'll follow Terry Tao's argument. Ideally what we'd like is a scoring function \(f(x)\) that yields the maximum expected score precisely when the claimed confidence \(x_i\) in the correct label for \(i\) is actually what the submitter believes is the true probability (or frequency) of that outcome. This means that we want \[L(x)=p\cdot f(x) + (1-p)\cdot f(1-x)\] for fixed \(p\) to be maximized when \(x = p\) …
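Numerically, for a fixed true probability \(p\), the expected log score over a grid of claimed confidences peaks exactly at the truthful report. A small illustrative sketch (the grid and \(p\) are my own choices):

    # For true probability p, the expected score E[f(x)] with f = log is
    #   p*log(x) + (1-p)*log(1-x),
    # which is maximized at x = p -- why log-loss rewards honest reports.
    import numpy as np

    p = 0.7
    xs = np.linspace(0.01, 0.99, 99)
    expected = p * np.log(xs) + (1 - p) * np.log(1 - xs)
    print("argmax at x =", xs[np.argmax(expected)])  # ~0.7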

The Kelly Criterion and a Sure Thing

The Kelly Criterion is an alternative to standard utility theory, which seeks to maximize expected utility. Instead, the Kelly Criterion seeks to maximize expected growth. That is, if we start out with an initial bankroll \(B_0\), we seek to maximize \(\mathrm{E}[g(t)]\), where \(B_t = B_0\cdot e^{g(t)}\). As a simple example, consider the following choice. We can have a sure $3000, or we can take the gamble of a \(\frac{4}{5}\) chance of $4000 and a \(\frac{1}{5}\) chance of $0. What does Kelly say? Assume we have a current bankroll of \(B_0\). After the first choice we have \(B_1 = B_0+3000\), which we can write as \[\mathrm{E}[g(1)] = \log\left(\frac{B_0+3000}{B_0}\right);\]for the second choice we have \[\mathrm{E}[g(1)] = \frac{4}{5} \log\left(\frac{B_0+4000}{B_0}\right).\]And so we want to compare \(\log\left(\frac{B_0+3000}{B_0}\right)\) and \(\frac{4}{5} \log\left(\frac{B_0+4000}{B_0}\right)\). Exponentiating, we're looking for the positive root of \[{\left(B_0+3000\right)}^5 - {\left(B_0+4000\right)}^4 B_0 = 0\] …
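A quick numerical sketch of the breakeven bankroll, assuming SciPy (the bracketing interval is my own guess):

    # Find the bankroll B0 at which the sure $3000 and the 4/5 shot at $4000
    # have equal expected log-growth: (B0+3000)^5 = (B0+4000)^4 * B0.
    import math
    from scipy.optimize import brentq

    def f(b):
        # Log form of (b+3000)^5 - (b+4000)^4 * b: same sign, better behaved.
        return 5 * math.log(b + 3000) - 4 * math.log(b + 4000) - math.log(b)

    root = brentq(f, 1e3, 1e6)
    print(f"breakeven bankroll = ${root:,.0f}")
    # Below this bankroll Kelly prefers the sure $3000; above it, the gamble.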

Prime Divisors of \(3^{32}-2^{32}\)

Find four prime divisors < 100 of \(3^{32}-2^{32}\). Source: British Math Olympiad, 2006. This factors nicely as \(3^{32}-2^{32} = \left(3^{16}+2^{16}\right)\left(3^{16}-2^{16}\right)\), and we can continue factoring in this way to get \[3^{32}-2^{32} = \left(3^{16}+2^{16}\right)\left(3^8+2^8\right)\left(3^4+2^4\right)\left(3^2+2^2\right)\left(3^2-2^2\right).\]The final three terms are \(5, 13, 97\), so we have three of the four required primes. For another prime divisor, consider \(3^{16}-2^{16}\). By Fermat's Little Theorem \(a^{16}-1\equiv 0 \bmod 17\) for all \(a\) with \((a,17)=1\), and so it follows that \(3^{16}-2^{16}\equiv 0 \bmod 17\), and we therefore have \(17\) as a fourth such prime divisor. Alternatively, note \( \left(\dfrac{3}{17}\right)=-1, \left(\dfrac{2}{17}\right)=1\), hence by Euler's Criterion \(3^8\equiv -1 \bmod 17\) and \(2^8\equiv 1 \bmod 17\), giving \(3^8+2^8\equiv 0\bmod 17\).
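A quick Python check confirms all four primes:

    # Verify that 5, 13, 17, and 97 all divide 3**32 - 2**32.
    N = 3**32 - 2**32
    print([p for p in (5, 13, 17, 97) if N % p == 0])  # [5, 13, 17, 97]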

Highest Powers of 3 and \(\left(1+\sqrt{2}\right)^n\)

Let \(\left(1+\sqrt{2}\right)^{2012}=a+b\sqrt{2}\), where \(a\) and \(b\) are integers. What is the greatest common divisor of \(b\) and \(81\)? Source: 2011-2012 SDML High School 2a, problem 15. Let \((1+\sqrt{2})^n = a_n + b_n \sqrt{2}\). I've thought about this some more, and there's a nice way to describe the highest power of \(3\) that divides \(b_n\). This is probably outside the scope of the intended solution, however. First note that \((1-\sqrt{2})^n = a_n - b_n \sqrt{2}\), and so from \((1+\sqrt{2})(1-\sqrt{2})=-1\) we get \((1+\sqrt{2})^n (1-\sqrt{2})^n = {(-1)}^n\). This gives \[{a_n}^2 - 2 {b_n}^2 = {(-1)}^n.\] Now let \(\operatorname{\nu}_p(n)\) denote the exponent of the highest power of the prime \(p\) that divides \(n\). From cubing and using the above result it's straightforward to prove that if \(\operatorname{\nu}_3(b_n) = k > 0\) then \(\operatorname{\nu}_3(b_{3n}) = k+1\). Note \((1+\sqrt{2})^4 = 17 + 12\sqrt{2} \equiv -1+3\sqrt{2} \pmod{3^2}\). Cubing and …
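The claim is easy to test computationally. Here's a sketch that expands \((1+\sqrt{2})^n\) with exact integer arithmetic via binary exponentiation (the helper name is mine):

    # Compute (1 + sqrt(2))**n = a + b*sqrt(2) with exact integers, using
    # (a + b*r)(c + d*r) = (ac + 2bd) + (ad + bc)*r for r = sqrt(2),
    # then read off gcd(b, 81), i.e. the power of 3 in b up to 3^4.
    from math import gcd

    def sqrt2_power(n):
        """Return (a, b) with (1 + sqrt(2))**n = a + b*sqrt(2)."""
        a, b = 1, 0            # (1 + sqrt(2))**0
        base_a, base_b = 1, 1  # (1 + sqrt(2))**1
        while n:
            if n & 1:
                a, b = a * base_a + 2 * b * base_b, a * base_b + b * base_a
            base_a, base_b = base_a**2 + 2 * base_b**2, 2 * base_a * base_b
            n >>= 1
        return a, b

    a, b = sqrt2_power(2012)
    print(gcd(b, 81))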

Sum of Two Odd Composite Numbers

What is the largest even integer that cannot be written as the sum of two odd composite numbers? Source: AIME 1984, problem 14. Note \(24 = 3\cdot 3 + 3\cdot 5\), and so if \(2k\) has a representation as the sum of even multiples of 3 and 5, say \(2k = e_3\cdot 3 + e_5\cdot 5\), we get a representation of \(2k+24\) as a sum of odd composites via \(2k+24 = (3+e_3)\cdot 3 + (3+e_5)\cdot 5\), since \(3+e_3\) and \(3+e_5\) are odd and at least \(3\). But by the Frobenius coin problem every number \(k > 3\cdot 5 -3-5 = 7\) is a nonnegative integer combination of 3 and 5; doubling, every even number \(2k > 14\) has a representation as the sum of even multiples of 3 and 5. Thus every even number \(n > 24+14=38\) has a representation as the sum of odd composites. Checking, we see that \(\boxed{38}\) has no representation as a sum of odd composites.
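A brute-force check confirms the answer (the search bound is my own):

    # Find even numbers with no representation as a sum of two odd composites.
    def is_odd_composite(m):
        return m % 2 == 1 and m > 1 and any(
            m % d == 0 for d in range(3, int(m**0.5) + 1))

    def has_rep(n):
        # Smallest odd composite is 9, so scan odd a with 9 <= a <= n - 9.
        return any(is_odd_composite(a) and is_odd_composite(n - a)
                   for a in range(9, n - 8, 2))

    print([n for n in range(2, 200, 2) if not has_rep(n)])  # largest is 38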