The paper does not propose a new method that modifies the core SVM technique; instead, it shows how to extend SVM to the semi-supervised setting. SVM classifies by dividing the feature space into regions according to the training data. When the feature space is high-dimensional and the training set is small (often smaller than the feature dimension), the learned support vectors tend to overfit the training data, because the test (unlabeled) data are never taken into account. This is the traditional SVM approach, known as inductive learning (IL).
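As a concrete illustration (not taken from the paper), a minimal sketch of the inductive setting using scikit-learn's LinearSVC; the data shapes and parameters are hypothetical:

# Minimal sketch of inductive SVM learning (hypothetical data shapes).
# The classifier sees only the labeled training set; the unlabeled test
# data plays no role in choosing the decision boundary.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 1000))   # few labeled examples, high dimension
y_train = rng.integers(0, 2, size=20)
X_test = rng.normal(size=(500, 1000))   # many unlabeled examples

clf = LinearSVC(C=1.0)
clf.fit(X_train, y_train)               # inductive: trained on labeled data only
y_pred = clf.predict(X_test)            # test data first seen at prediction time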
This paper therefore incorporates information from the unlabeled data during training, which is transductive learning (TL). The transductive SVM (TSVM) tries to separate not only the training data but also the test data in the feature space. Since the test data are unlabeled, their labels must be guessed and then updated during training, and the paper describes how this can be done, roughly as sketched below.
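A minimal sketch of this guess-and-retrain loop, assuming scikit-learn's LinearSVC and a simplified self-labeling scheme; the function name and the weighting schedule are illustrative and not the paper's exact label-switching algorithm:

# Simplified sketch of the transductive idea (not the full TSVM solver):
# guess labels for the unlabeled data from an initial SVM, then retrain
# while gradually increasing how much those guessed labels are trusted.
import numpy as np
from sklearn.svm import LinearSVC

def simple_tsvm(X_lab, y_lab, X_unlab, n_rounds=5):
    clf = LinearSVC(C=1.0)
    clf.fit(X_lab, y_lab)                      # step 1: inductive SVM on labeled data
    y_guess = clf.predict(X_unlab)             # step 2: initial guesses for unlabeled data
    for t in range(1, n_rounds + 1):
        w_unlab = t / n_rounds                 # influence of unlabeled points grows each round
        X_all = np.vstack([X_lab, X_unlab])
        y_all = np.concatenate([y_lab, y_guess])
        weights = np.concatenate([np.ones(len(y_lab)),
                                  np.full(len(y_guess), w_unlab)])
        clf = LinearSVC(C=1.0)
        clf.fit(X_all, y_all, sample_weight=weights)
        y_guess = clf.predict(X_unlab)         # step 3: update guessed labels and repeat
    return clf, y_guess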
Their experiments also give useful insight into the difference between SVM and TSVM. The results show that when the training set is small relative to the test set, TSVM greatly outperforms SVM. When the training and test sets are of roughly equal size, their performance is about the same, since the test data then contributes little additional information.
Reference:
T. Joachims, "Transductive Inference for Text Classification using Support Vector Machines," Proc. International Conference on Machine Learning (ICML), 1999.