

Recent posts

Gambling to Optimize Expected Median Bankroll

Gambling to optimize your expected bankroll mean is extremely risky, as you wager your entire bankroll on any favorable gamble, making ruin almost inevitable. But what if, instead, we gambled not to maximize the expected bankroll mean, but the expected bankroll median? Let the probability of winning a favorable bet be \(p\), and the net odds be \(b\). That is, if we wager \(1\) unit and win, we get back \(b\) units (in addition to our wager). Assume our betting strategy is to wager some fraction \(f\) of our bankroll, hence \(0 \leq f \leq 1\). By our assumption, our betting strategy is invariant with respect to the actual size of our bankroll, and so if we were to repeat this gamble \(n\) times with the same \(p\) and \(b\), the strategy wouldn't change. It follows that we may assume an initial bankroll of size \(1\). Let \( q = 1-p \). Now, after \(n\) such gambles the number of wins \(k\) would have a binomial distribution with probability mass function \[ \Pr(k, n, p) = \binom{n}{k} p^k q^{n-k}. \] …
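
Here is a minimal numerical sketch of this setup (my own code, not from the post): the bankroll after \(k\) wins and \(n-k\) losses is \((1+fb)^k (1-f)^{n-k}\), which is increasing in \(k\), so its median is obtained by plugging in the binomial median of \(k\), and we can then search over \(f\) directly. The function names and the example values of \(n\), \(p\), and \(b\) are my own choices; for these parameters the maximizer lands on the Kelly fraction \((bp-q)/b = 0.2\).

```python
import math

def binomial_median(n, p):
    """Smallest k with P(K <= k) >= 1/2 for K ~ Binomial(n, p)."""
    q = 1.0 - p
    cdf = 0.0
    for k in range(n + 1):
        cdf += math.comb(n, k) * p**k * q**(n - k)
        if cdf >= 0.5:
            return k
    return n

def median_bankroll(f, n, p, b):
    """Median final bankroll after n bets when wagering a fraction f each time."""
    k = binomial_median(n, p)
    return (1 + f * b) ** k * (1 - f) ** (n - k)

# Grid search over f for the fraction that maximizes the median bankroll.
n, p, b = 100, 0.6, 1.0            # example values, not from the post
best_f = max((i / 1000 for i in range(1000)),
             key=lambda f: median_bankroll(f, n, p, b))
print(best_f)                      # ~0.2, matching the Kelly fraction (b*p - q)/b
```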

An Island of Liars is an Ensemble of Experts

In my previous post I looked at how a group of experts may be combined into a single, more powerful classifier, which I call NaiveBoost after the related AdaBoost. I'll illustrate how it can be used with a few examples. As before, we're faced with making a binary decision, which we can view as an unknown label \( L \in \{ +1, -1 \}\). Furthermore, the prior distribution on \( L \) is assumed to be uniform. Let our experts' independent probabilities be \( p_1 = 0.8, p_2 = 0.7, p_3 = 0.6\) and \(p_4 = 0.5\). Our combined NaiveBoost classifier is \[ C(S) = \sum_i \frac{L_i}{2}\log{\left( \frac{p_i}{1-p_i}\right)},\] where \( S = \{ L_i \} \). A few things to note: \( \log{\left( \frac{p_i}{1-p_i}\right)} \) is \( {\rm logit}( p_i )\), and an expert with \( p = 0.5 \) contributes 0 to our classifier. This latter observation is what we'd expect, as \( p = 0.5 \) is random guessing. Also, experts with probabilities \( p_i \) and \( p_j \) such that \( p_i = 1 - p_j \) …
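
As a concrete illustration (a sketch of mine, not code from the post), here is \(C(S)\) evaluated for the four experts above on a hypothetical set of votes; note that the \(p_4 = 0.5\) expert contributes nothing, as observed above.

```python
import math

def naive_boost_score(labels, probs):
    """C(S) = sum_i (L_i / 2) * log(p_i / (1 - p_i)) -- a logit-weighted vote."""
    return sum(L / 2 * math.log(p / (1 - p)) for L, p in zip(labels, probs))

probs = [0.8, 0.7, 0.6, 0.5]       # the four experts from the post
labels = [+1, -1, +1, -1]          # hypothetical votes, not from the post
score = naive_boost_score(labels, probs)
print(score, "-> predict +1" if score > 0 else "-> predict -1")
```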

Combining Expert Opinions: NaiveBoost

In many situations we're faced with multiple expert opinions. How should we combine them into one opinion, hopefully better than any single opinion? I'll demonstrate the derivation of a classifier I'll call NaiveBoost. We'll start with a simple situation, and later gradually introduce more complexity. Let each expert state a yes or no opinion in response to a yes/no question (binary classifiers), let each expert be independent of the other experts, and assume expert \(i\) is correct with probability \(p_i\). We'll also assume that the prior distribution on whether the correct answer is yes or no is uniform, i.e. each occurs with probability 0.5. Label a "yes" as +1, and a "no" as -1. We ask our question, which has some unknown +1/-1 answer \(L\), and get back a set of responses (labels) \(S = \{L_i \}\), where \(L_i\) is the response from expert \(i\). Observe we have \[ \Pr(S | L=+1) = \prod_{i} {p_i}^{\frac{L_i+1}{2}} \cdot {(1-p_i)}^{\frac{1-L_i}{2}}. \] …
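
Here's a small sketch (mine, under the post's assumptions) of the Bayes step this likelihood feeds into: with a uniform prior, \(\Pr(L=+1 | S)\) is just this likelihood divided by the sum of the likelihoods for \(L=+1\) and \(L=-1\). The helper names and example numbers are my own.

```python
def likelihood(labels, probs, L):
    """Pr(S | L): expert i agrees with the true label L with probability p_i."""
    out = 1.0
    for L_i, p in zip(labels, probs):
        out *= p if L_i == L else 1 - p
    return out

def posterior_plus(labels, probs):
    """Pr(L = +1 | S) under the uniform prior on L."""
    plus = likelihood(labels, probs, +1)
    minus = likelihood(labels, probs, -1)
    return plus / (plus + minus)

print(posterior_plus([+1, +1, -1], [0.8, 0.7, 0.6]))   # ~0.86
```

The log posterior odds, \(\log\frac{\Pr(L=+1|S)}{\Pr(L=-1|S)} = \sum_i L_i \log\frac{p_i}{1-p_i}\), is twice the \(C(S)\) score quoted in the post above.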

Simplified Multinomial Kelly

Here's a simplified version of the calculation of optimal Kelly bets when you have multiple outcomes (e.g. horse races). The standard approach is the Smoczynski & Tomkins algorithm, which is explained here (or in the original paper): https://en.wikipedia.org/wiki/Kelly_criterion#Multiple_horses Let's say there's a wager that, for every $1 you bet, will return a profit of $b if you win. Let the probability of winning be \(p\), and of losing be \(q=1-p\). The original Kelly criterion says to wager only if \(b\cdot p-q > 0\) (the expected value is positive), and in this case to wager a fraction \( \frac{b\cdot p-q}{b} \) of your bankroll. But in a horse race, how do you decide which set of outcomes is favorable to bet on? It's tricky, because these wagers are mutually exclusive, i.e. you can win at most one. It turns out there's a simple and intuitive method to find which bets are favorable: 1) Look at \( b\cdot p-q\) for every horse. 2) Pick any horse for which \( b\cdot p-q > 0\) and mark it …
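
The multi-horse procedure is only partially quoted in this excerpt, so as a hedged sketch I'll just code the single-outcome Kelly rule stated above; the function name and the example numbers are mine.

```python
def kelly_fraction(p, b):
    """Kelly fraction for a single bet with win probability p and net odds b."""
    q = 1.0 - p
    edge = b * p - q
    return edge / b if edge > 0 else 0.0   # bet only when the expected value is positive

print(kelly_fraction(0.6, 1.0))   # 0.2
print(kelly_fraction(0.4, 2.0))   # 0.1  (edge = 2*0.4 - 0.6 = 0.2, divided by b = 2)
```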

Notes on Setting up a Titan V under Ubuntu 17.04

I recently purchased a Titan V GPU to use for machine and deep learning, and in the process of installing the latest Nvidia drivers I hosed my Ubuntu 16.04 install. I was overdue for a fresh install of Linux anyway, so I decided to upgrade some of my drives at the same time. Here are some of my notes on the process I went through to get the Titan V working perfectly with TensorFlow 1.5 under Ubuntu 17.04.

Old install:
- Ubuntu 16.04
- EVGA GeForce GTX Titan SuperClocked 6GB
- 2TB Seagate NAS HDD + additional drives

New install:
- Ubuntu 17.04
- Titan V 12GB
- / partition on a 250GB Samsung 840 Pro SSD (had an extra around)
- /home partition on a new 1TB Crucial MX500 SSD
- New WD Blue 4TB HDD + additional drives

You'll need to install Linux in legacy mode, not UEFI, in order to use Nvidia's proprietary drivers for the Titan V. Note that Linux will cheerfully boot in UEFI mode, but will not load any proprietary drivers (including Nvidia's). You'll need proprietary drivers …
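
Not from the post, but a small check related to the UEFI note above: on Linux, /sys/firmware/efi is present only when the system booted via UEFI, so its absence suggests a legacy (BIOS) boot. The function name is my own.

```python
import os

def booted_with_uefi():
    """True when /sys/firmware/efi exists, which only happens on UEFI boots."""
    return os.path.isdir("/sys/firmware/efi")

print("UEFI boot" if booted_with_uefi() else "Legacy (BIOS) boot")
```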

Solving IMO 1989 #6 using Probability and Expectation

IMO 1989 #6: A permutation \((x_1, x_2, \ldots , x_{2n})\) of the set \(\{1, 2, \ldots , 2n\}\), where \(n\) is a positive integer, is said to have property \(P\) if \( | x_i - x_{i+1} | = n\) for at least one \(i\) in \(\{1, 2, \ldots , 2n-1\}\). Show that for each \(n\) there are more permutations with property \(P\) than without. Solution: We first observe that the expected number of pairs \(\{i, i+1\}\) for which \( | x_i - x_{i+1} | = n\) is \(E = 1\). To see this, note that if \(j\), \( 1 \leq j \leq n\), appears in position \(1\) or \(2n\) it's adjacent to one number, and otherwise to two. Thus the probability it's adjacent to its partner \(j+n\) in a random permutation is \[\begin{aligned} e_j &= \frac{2}{2n}\cdot \frac{1}{2n-1} + \frac{2n-2}{2n}\cdot \frac{2}{2n-1} \\ &= \frac{2(2n-1)}{2n(2n-1)} \\ &= \frac{1}{n}. \end{aligned}\] By linearity of expectation, the expected number of \(j\) adjacent to its partner \(j+n\) is \(\sum_{j=1}^{n} e_j = n \cdot \frac{1}{n} = 1\). …
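
A quick Monte Carlo sanity check (my own, not part of the proof) of the two facts used above: the expected number of adjacent partner pairs is 1, and permutations with property \(P\) outnumber those without. The function names and the choice \(n = 4\) are arbitrary.

```python
import random

def count_partner_adjacencies(perm, n):
    """Number of adjacent positions i with |x_i - x_{i+1}| = n."""
    return sum(1 for a, b in zip(perm, perm[1:]) if abs(a - b) == n)

def simulate(n=4, trials=200_000):
    items = list(range(1, 2 * n + 1))
    total = with_property_P = 0
    for _ in range(trials):
        random.shuffle(items)
        c = count_partner_adjacencies(items, n)
        total += c
        with_property_P += c >= 1
    return total / trials, with_property_P / trials

print(simulate())   # expected count close to 1.0, and Pr(property P) above 0.5
```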