Wednesday, April 23, 2008

[Reading] Lecture 09 - Improved Boosting Algorithms using Confidence-Rated Predictions

This paper proposes a more general AdaBoost algorithm than the one originally given by Freund and Schapire. For example, the weak hypotheses no longer have to output values in {-1, +1}; they can range over all of R, carrying a real-valued confidence with each prediction. The paper provides different ways to determine alpha and shows that Freund and Schapire's version can be derived as a special case of their method. It then gives a criterion for finding weak hypotheses, which was not an issue in the original version. The paper also analyzes generalization error, so minimizing training error is no longer the only concern and goal. The last half of the paper is quite important as well: the authors show how to solve multiclass and multi-label classification problems with AdaBoost, whereas Freund and Schapire's algorithm handles only binary (two-class), single-label problems.
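To make the alpha-selection idea concrete, here is a minimal sketch, not the paper's exact algorithm: binary labels y in {-1, +1}, a confidence-rated decision stump as the weak learner, and alpha chosen by a crude line search that minimizes the normalizer Z(alpha) = sum_i D(i) exp(-alpha y_i h(x_i)), which is the criterion the paper advocates. All function and variable names are my own.

```python
import numpy as np


def train_stump(X, y, D):
    """Pick the feature/threshold that minimizes weighted error and
    return a confidence-rated stump h(x) with outputs in [-1, +1]."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (+1, -1):
                pred = np.where(X[:, j] > thr, sign, -sign).astype(float)
                err = np.sum(D * (pred != y))
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    err, j, thr, sign = best
    # Confidence shrinks toward 0 as the stump approaches random guessing.
    conf = max(1.0 - 2.0 * err, 0.0)
    return lambda X: conf * np.where(X[:, j] > thr, sign, -sign)


def boost(X, y, T=20):
    n = len(y)
    D = np.full(n, 1.0 / n)          # initial example weights
    ensemble = []
    for _ in range(T):
        h = train_stump(X, y, D)
        margins = y * h(X)
        # Choose alpha by a line search over Z(alpha) = sum_i D(i) exp(-alpha * margin_i).
        alphas = np.linspace(0.01, 5.0, 500)
        Z = np.array([np.sum(D * np.exp(-a * margins)) for a in alphas])
        alpha = alphas[np.argmin(Z)]
        D = D * np.exp(-alpha * margins)
        D /= D.sum()                  # re-normalize so D stays a distribution
        ensemble.append((alpha, h))
    # Final classifier: sign of the weighted vote of all weak hypotheses.
    return lambda X: np.sign(sum(a * h(X) for a, h in ensemble))
```

For weak hypotheses restricted to {-1, +1}, this line search recovers the familiar closed form alpha = (1/2) ln((1 - err) / err), which is why the original AdaBoost falls out as a special case.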

This is a good paper for understanding AdaBoost more deeply. If I have to implement AdaBoost in the future (homework!?) and performance matters, I will refer to this paper to seek better approaches.

Reference:
R. E. Schapire and Y. Singer, "Improved Boosting Algorithms Using Confidence-rated Predictions," Machine Learning, vol. 37, no. 3, pp. 297-336, 1999.
