Wednesday, May 7, 2008

[Reading] Lecture 11 - Learning Low-Level Vision

This paper models several low-level computer vision problems with the Markov random field (MRF) model. The application-dependent target solution (the underlying scene) is modeled as the hidden states of the MRF, and the input image is modeled as the observation. After this modeling step, the goal is to solve the MRF, that is, to obtain the most probable configuration of the hidden states.
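To make the formulation concrete for myself, here is a minimal sketch of how such an MRF scores one candidate solution: a local evidence term links each hidden scene patch to its observed image patch, and a pairwise compatibility term couples neighboring scene patches. The Gaussian-shaped phi/psi functions and the flattened-patch representation are my own illustrative assumptions, not the exact functions from the paper.

```python
import numpy as np

def phi(scene_patch, image_patch, sigma=1.0):
    # Local evidence: how well a candidate scene patch explains
    # the observed image patch (illustrative Gaussian form).
    d = np.sum((scene_patch - image_patch) ** 2)
    return np.exp(-d / (2 * sigma ** 2))

def psi(patch_a, patch_b, sigma=1.0):
    # Pairwise compatibility between neighboring scene patches
    # (here just a squared difference; the paper compares the
    # overlapping regions of adjacent patches).
    d = np.sum((patch_a - patch_b) ** 2)
    return np.exp(-d / (2 * sigma ** 2))

def joint_score(scene, image, neighbors):
    # Unnormalized posterior of one candidate labeling:
    # product of all evidence and compatibility terms.
    score = 1.0
    for i, obs in image.items():
        score *= phi(scene[i], obs)
    for i, j in neighbors:
        score *= psi(scene[i], scene[j])
    return score

# Toy example: two nodes with scalar "patches"
image = {0: np.array([1.0]), 1: np.array([1.2])}
scene = {0: np.array([0.9]), 1: np.array([1.1])}
print(joint_score(scene, image, neighbors=[(0, 1)]))
```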

The algorithm used in this paper to solve the MRF is belief propagation. Its physical meaning is easy to understand, and other papers have proved some of its properties, such as its convergence behavior.
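To remind myself how the message passing works, here is a minimal sum-product belief-propagation sketch on a chain-structured MRF with discrete states. The shared pairwise potential, the fixed number of iterations, and the chain topology are simplifying assumptions for illustration, not the exact schedule used by Freeman and Pasztor.

```python
import numpy as np

def belief_propagation(phi, psi, n_iters=10):
    """Sum-product BP on a chain of N nodes with K discrete states.

    phi: (N, K) local evidence for each node/state.
    psi: (K, K) pairwise compatibility shared by all edges.
    Returns (N, K) approximate marginals (beliefs).
    """
    N, K = phi.shape
    # msgs[(i, j)] is the message from node i to its neighbor j
    msgs = {(i, j): np.ones(K) for i in range(N) for j in (i - 1, i + 1)
            if 0 <= j < N}

    for _ in range(n_iters):
        new_msgs = {}
        for (i, j) in msgs:
            # Combine local evidence at i with messages from all
            # neighbors of i except the target j, then pass the
            # result through the pairwise compatibility.
            incoming = phi[i].copy()
            for k in (i - 1, i + 1):
                if 0 <= k < N and k != j:
                    incoming *= msgs[(k, i)]
            m = psi.T @ incoming
            new_msgs[(i, j)] = m / m.sum()   # normalize for stability
        msgs = new_msgs

    # Belief at each node: local evidence times all incoming messages.
    beliefs = phi.copy()
    for (i, j) in msgs:
        beliefs[j] *= msgs[(i, j)]
    beliefs /= beliefs.sum(axis=1, keepdims=True)
    return beliefs

# Toy run: 4 nodes, 3 states, potential favoring identical neighbors
phi = np.random.rand(4, 3)
psi = np.eye(3) * 2.0 + 1.0
print(belief_propagation(phi, psi))
```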

They give three application examples using MRF: the first is super-resolution, the second is shading and reflectance estimation, and the last one is motion estimation. The idea used in super-resolution is quite novel, but it is a little impractical. If the database is not huge, it is unreasonable to assume that a suitable high-resolution patch can be found in the database.
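As I understand the super-resolution scheme, each low-resolution input patch is matched against a training database of low/high-resolution patch pairs, and the retrieved high-resolution candidates become the discrete states that belief propagation then selects among. A rough sketch of that candidate-retrieval step is below; the brute-force nearest-neighbor search and all variable names are my assumptions, not the paper's exact procedure.

```python
import numpy as np

def build_candidates(low_res_patches, db_low, db_high, k=10):
    """For each input low-res patch, return the k high-res patches
    whose low-res counterparts in the training database are closest.

    low_res_patches: (M, D) flattened input patches.
    db_low:  (N, D) flattened low-res training patches.
    db_high: (N, D_hi) corresponding high-res training patches.
    Returns a list of (k, D_hi) candidate arrays, one per input patch.
    """
    candidates = []
    for p in low_res_patches:
        # Brute-force nearest-neighbor search: fine for a small
        # database, which is exactly the practical concern above.
        dists = np.sum((db_low - p) ** 2, axis=1)
        nearest = np.argsort(dists)[:k]
        candidates.append(db_high[nearest])
    return candidates
```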

The second application is similar to the first one, and so is its weakness in the underlying assumption. Both are ill-posed problems, and their results are easily biased by the training database.

I don't think anyone will adopt their method for the third application. It is impractical to spend so much time estimating motion.

Reference:
William T. Freeman and Egon C. Pasztor, "Learning Low-Level Vision," ICCV, pp. 1182-1189, 1999.
