Monday, March 10, 2008

[Reading] Lecture 04 - Nonlinear Dimensionality Reduction by Locally Linear Embedding

The idea used in this paper is very simple. Imagine that the feature points are scattered on a manifold made of locally linear patches, so each feature point can be described as a linear combination of its neighbors. Under this assumption, a low-dimensional description (the reconstruction weights) of each feature point is first obtained by finding its k nearest neighbors and minimizing a reconstruction error. Given those weights, the d-dimensional embedding coordinates Y (with d smaller than the original dimension D) are chosen to minimize the sum of the reconstruction costs over all feature points.
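The two steps above (solve for local weights, then find the embedding) can be sketched as follows. This is a minimal illustration in numpy, not the authors' code; the regularization constant and the way ties are broken in the kNN step are my own assumptions.

```python
import numpy as np

def lle(X, k, d):
    """Minimal LLE sketch: X is (N, D), returns an (N, d) embedding."""
    N = X.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        # Step 1: find the k nearest neighbors of point i (excluding itself).
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]
        # Step 2: solve for weights minimizing |x_i - sum_j w_ij x_j|^2
        # subject to sum_j w_ij = 1, via the local Gram system.
        Z = X[nbrs] - X[i]                    # neighbors centered on x_i
        C = Z @ Z.T                           # (k, k) local covariance
        C += np.eye(k) * 1e-3 * np.trace(C)   # regularize (assumed constant)
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()              # enforce the sum-to-one constraint
    # Step 3: the embedding is given by the bottom eigenvectors of
    # M = (I - W)^T (I - W), skipping the constant eigenvector (eigenvalue ~ 0).
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]
```

Note that the weights W are invariant to rotations, rescalings, and translations of each neighborhood, which is why the same weights can be reused to reconstruct the points in the low-dimensional space.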

My idea:
I think a major trick used in LLE is that the (rough) low-dimensional descriptions are found first. In this way, some information (the local relationships, in LLE) is preserved when the embedding coordinates are computed. My point is: if we obtain the weights by other methods (e.g., choosing neighbors that belong to the same class instead of using plain kNN), something other than the local relationships (e.g., in-class relationships) can be preserved after the dimensionality reduction.
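The proposed variant only changes the neighbor-selection step. A hypothetical sketch of class-restricted neighbor selection (the function name and interface are my own, not from the paper):

```python
import numpy as np

def class_neighbors(X, labels, i, k):
    """Pick the k nearest neighbors of point i restricted to its own class;
    a hypothetical supervised replacement for the kNN step of LLE."""
    same = np.flatnonzero(labels == labels[i])  # indices in the same class
    same = same[same != i]                      # exclude the point itself
    dists = np.linalg.norm(X[same] - X[i], axis=1)
    return same[np.argsort(dists)[:k]]
```

The rest of the algorithm (weight solving and eigendecomposition) would stay unchanged, so the preserved structure is whatever relation the neighbor sets encode.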

Reference:
S. T. Roweis and L. K. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding." Science, Vol. 290, pp. 2323-2326, 2000.
