In my previous post I looked at how a group of experts may be combined into a single, more powerful classifier, which I call NaiveBoost after the related AdaBoost. I'll illustrate how it can be used with a few examples. As before, we're faced with making a binary decision, which we can view as an unknown label \( L \in \{ +1, -1 \}\). Furthermore, the prior distribution on \( L \) is assumed to be uniform. Let our experts' independent probabilities be \( p_1 = 0.8, p_2 = 0.7, p_3 = 0.6\) and \( p_4 = 0.5\). Our combined NaiveBoost classifier is \[ C(S) = \sum_i \frac{L_i}{2}\log{\left( \frac{p_i}{1-p_i}\right)},\] where \( S = \{ L_i \} \). A few things to note are that \( \log{\left( \frac{p_i}{1-p_i}\right)} \) is \( {\rm logit}( p_i )\), and an expert with \( p = 0.5 \) contributes 0 to our classifier. This latter observation is what we'd expect, as \( p = 0.5 \) is random guessing. Also, experts with probabilities \( p_i \) and \( p_j \) such that \( p_i = 1 - p_j \) receive weights of equal magnitude but opposite sign, since \( {\rm logit}(1-p) = -{\rm logit}(p) \).
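To make the formula concrete, here is a minimal Python sketch (the function name naive_boost_score is mine, not from the post) that evaluates \( C(S) \) for the four experts above and classifies by the sign of the score, which is the natural decision rule under the uniform prior.

```python
import math

def naive_boost_score(labels, probs):
    """C(S) = sum_i (L_i / 2) * logit(p_i), with labels L_i in {+1, -1}."""
    return sum(l / 2 * math.log(p / (1 - p)) for l, p in zip(labels, probs))

# Experts with independent accuracies 0.8, 0.7, 0.6 and 0.5.
probs = [0.8, 0.7, 0.6, 0.5]

# Example: the first three experts vote +1, the last votes -1.
labels = [+1, +1, +1, -1]

score = naive_boost_score(labels, probs)
decision = +1 if score > 0 else -1
print(score, decision)
```

Note that the \( p_4 = 0.5 \) expert has weight \( \log(0.5/0.5) = 0 \), so its vote never moves the score, exactly as observed above.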