Since the two papers address the same problem and use similar techniques, I discuss them together here.
About one and a half years ago, I worked on face recognition for a music video, trying to find out the relationships among the characters shown in the video. Because I was very bad at reading papers, I searched the internet for information on how to do face recognition and found a very good website about it:
http://www.face-rec.org/algorithms/
Many face databases and related research can be found on this website.
If you follow the link, you will see that PCA is listed at the top of the page. In fact, that is why I used PCA for face recognition in my project at the time. I had not read paper [1] back then, but I used exactly the same techniques (including the trick that reduces the cost of PCA from solving an N²×N² matrix to an M×M matrix, where N is the image height or width and M is the number of images). That is because the approach is so old and classical that people have turned it into tutorials.
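To make that trick concrete, here is a minimal numpy sketch (my own reconstruction, not code from [1]): with M training images, we eigen-decompose the small M×M matrix instead of the huge N²×N² covariance and then map its eigenvectors back to eigenfaces.

```python
import numpy as np

# A minimal sketch of the "small matrix" trick, assuming `images` is an
# (M, N, N) array of M aligned, normalized grayscale faces.
def eigenfaces(images, num_components):
    M = images.shape[0]
    A = images.reshape(M, -1).astype(np.float64)   # each row is one flattened face
    mean_face = A.mean(axis=0)
    A = A - mean_face                               # center the data
    # Instead of eigen-decomposing the huge (N^2 x N^2) covariance A.T @ A,
    # decompose the small (M x M) matrix A @ A.T; both share the nonzero eigenvalues.
    small = A @ A.T
    eigvals, eigvecs = np.linalg.eigh(small)        # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:num_components]
    # If (A A^T) v = lambda v, then u = A^T v is an eigenvector of A^T A.
    components = A.T @ eigvecs[:, order]
    components /= np.linalg.norm(components, axis=0)
    return mean_face, components                    # components: (N*N, num_components)
```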
The following steps describe my earlier experiment (a code sketch follows the list):
1. Collect a face database and normalize the pictures (size, illumination, contrast). Run PCA on these normalized pictures to get the eigenfaces, and discard the k most significant principal components (I forget the exact value of k in my experiment). The reason why we have to discard them is discussed in [2].
2. Compute the eigenface components of each detected face (i.e., project it onto the face space). The eigenvectors used here are the ones obtained in step 1.
3. Choose one face by hand, obtain the nearest neighbors of that face, and check whether those faces come from the same person.
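As a rough illustration of steps 2 and 3, here is a hedged sketch that reuses the eigenfaces() helper above; the gallery, labels, and query names are placeholders I made up, and the parameter values are guesses rather than the ones I actually used back then.

```python
# A hedged sketch of steps 2-3, reusing the hypothetical eigenfaces() helper
# above. `gallery` (M, N, N), `labels` (length M), and `query` (N, N) are
# illustrative placeholders, not data from the original experiment.
def nearest_faces(gallery, labels, query, num_components=50, discard=3, top=5):
    mean_face, components = eigenfaces(gallery, num_components)
    kept = components[:, discard:]                  # drop the most significant components
    M = gallery.shape[0]
    gallery_proj = (gallery.reshape(M, -1) - mean_face) @ kept
    query_proj = (query.reshape(-1) - mean_face) @ kept
    # Euclidean distance in face space; smaller means more similar.
    dists = np.linalg.norm(gallery_proj - query_proj, axis=1)
    nearest = np.argsort(dists)[:top]
    return [(labels[i], dists[i]) for i in nearest]
```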
Since the scene changes often in a music video, the experiment showed that the lighting condition dominated the results. However, when the lighting conditions were roughly the same, PCA could more or less distinguish the people. If I had known about Fisherfaces at the time, I might have tried FLD to see whether it could solve the lighting problem.
Summary:
Paper [1]:
A face image is treated as a feature vector. However, the dimension of an image is too large to handle directly, so they perform dimension reduction with PCA. An important point in this paper is that the complexity of solving PCA can be reduced when the number of vectors is smaller than their dimension.
Paper [2]:
They argue that PCA preserves all the differences between feature vectors, which is inadequate for face recognition. FLD is more suitable because it diminishes the in-class differences when it performs dimension reduction.
Comments:
PCA provides a systematic way to obtain the most distinguishable vectors (principal components) from a collection of feature vectors. However, it is very sensitive to "noise" because PCA does not know which differences can be ignored when it is used for classification. FLD enlarges the between-class differences and diminishes the in-class differences when it performs dimension reduction. PCA is better for feature reconstruction; FLD is more suitable for classification.
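To make the contrast concrete, here is a minimal FLD sketch in the same numpy style; the pseudo-inverse shortcut is my own simplification, and paper [2] actually applies PCA first so that the within-class scatter matrix is nonsingular.

```python
# A minimal FLD sketch on labeled feature vectors: X has one sample per row,
# y holds integer class labels. Illustrative only; paper [2] first applies PCA
# so that the within-class scatter matrix S_W is nonsingular.
def fld(X, y, num_components):
    d = X.shape[1]
    overall_mean = X.mean(axis=0)
    S_W = np.zeros((d, d))                          # within-class scatter
    S_B = np.zeros((d, d))                          # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        S_W += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - overall_mean).reshape(-1, 1)
        S_B += Xc.shape[0] * (diff @ diff.T)
    # Directions maximizing between-class scatter relative to within-class scatter.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
    order = np.argsort(eigvals.real)[::-1][:num_components]
    return eigvecs[:, order].real                   # projection matrix (d, num_components)
```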
In my opinion, we cannot say that FLD is better than PCA, because their main goals are different. Besides, a prerequisite of FLD is that the data must be supervised, that is, each vector must be labeled beforehand. In contrast, PCA can be applied directly to unlabeled data.
References:
[1] M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, 3(1), pp. 71-86, 1991.
[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class-Specific Linear Projection," European Conference on Computer Vision, vol. I, pp. 43-58, 1996.