In my previous post I looked at how a group of experts may be combined into a single, more powerful classifier, which I call NaiveBoost after the related AdaBoost. I'll illustrate how it can be used with a few examples. As before, we're faced with making a binary decision, which we can view as an unknown label \( L \in \{+1, -1\} \). Furthermore, the prior distribution on \( L \) is assumed to be uniform. Let our experts' independent probabilities be \( p_1 = 0.8 \), \( p_2 = 0.7 \), \( p_3 = 0.6 \) and \( p_4 = 0.5 \). Our combined NaiveBoost classifier is \( C(S) = \sum_i \frac{L_i}{2} \log\left( \frac{p_i}{1-p_i} \right) \), where \( S = \{ L_i \} \). A few things to note are that \( \log\left( \frac{p_i}{1-p_i} \right) \) is \( \operatorname{logit}(p_i) \), and an expert with \( p = 0.5 \) contributes 0 to our classifier. This latter observation is what we'd expect, as \( p = 0.5 \) is random guessing. Also, experts with probabilities \( p_i \) and \( p_j \) such that \( p_i = 1 - p_j \) have logits of equal magnitude and opposite sign, so when they report the same label their contributions cancel.
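To make the formula concrete, here's a minimal sketch in Python of the combined classifier; the function name `naiveboost` and the example votes are my own choices for illustration, not part of the original derivation:

```python
import math

def naiveboost(labels, probs):
    """Combined classifier C(S) = sum_i (L_i / 2) * logit(p_i).

    labels: each expert's vote, +1 or -1
    probs:  each expert's independent probability of being correct
    Returns the combined score; its sign is the predicted label.
    """
    return sum(l / 2 * math.log(p / (1 - p)) for l, p in zip(labels, probs))

# The four experts from the text; the fourth (p = 0.5) contributes nothing.
probs = [0.8, 0.7, 0.6, 0.5]

# Hypothetical example: experts 1 and 4 vote +1, experts 2 and 3 vote -1.
labels = [+1, -1, -1, +1]
print(naiveboost(labels, probs))  # ~0.0668 > 0, so the prediction is +1
```

Note how in this example the single strongest expert (\( p_1 = 0.8 \), weight \( \tfrac{1}{2}\log 4 \approx 0.693 \)) narrowly outvotes the two weaker dissenters combined (\( 0.424 + 0.203 \approx 0.627 \)), while the \( p = 0.5 \) expert has no effect at all.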