Monday, May 5, 2008

[Reading] Lecture 11 - An Introduction to Graphical Models

This paper is easy to read. When I first heard the term "graphical model," I searched the internet and found that this paper was the most suitable one for a beginner like me.

The theory behind graphical models is probability theory; in other words, any graphical model can be described in mathematical language. The paper starts with an example (Fig. 1) that explains how to use a graphical model to represent a simple Bayesian model. After that, many directed and undirected graphical models are introduced. From these examples we can learn how to translate a probability problem into a graph problem.
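To make the factorization idea concrete, here is a minimal Python sketch (my own toy example, not taken from the paper): a directed graphical model encodes the joint distribution as a product of local conditionals, p(x1, ..., xn) = prod_i p(xi | parents(xi)). The two-node network "Rain -> WetGrass" and its probability tables below are made up for illustration.

# Toy example (not from the paper): a directed graphical model encodes
# p(x1,...,xn) = prod_i p(x_i | parents(x_i)).
# Tiny two-node network "Rain -> WetGrass" with made-up probabilities.

p_rain = {True: 0.2, False: 0.8}                      # p(Rain)
p_wet_given_rain = {                                  # p(WetGrass | Rain)
    True:  {True: 0.9, False: 0.1},
    False: {True: 0.1, False: 0.9},
}

def joint(rain, wet):
    """Joint probability via the factorization implied by the graph."""
    return p_rain[rain] * p_wet_given_rain[rain][wet]

# Example: p(Rain=True, WetGrass=True) = 0.2 * 0.9 = 0.18
print(joint(True, True))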

Since solving a graphical model is not the main topic of this paper, it only introduces a few techniques for simplifying the problem. The two techniques, "variable elimination" and "dynamic programming," are also useful in general programming, not just for solving graphical models. For example, in our homework 2, we can use variable elimination to remove some redundant calculations.
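As an illustration of variable elimination (again a toy example of my own, not from the paper or the homework), the following Python sketch computes the marginal p(x3) on a binary chain x1 -> x2 -> x3 by summing out one variable at a time, instead of enumerating every joint assignment.

# Toy variable elimination on a chain x1 -> x2 -> x3 with binary variables.
p_x1 = {0: 0.6, 1: 0.4}                               # p(x1)
p_x2_given_x1 = {0: {0: 0.7, 1: 0.3},                 # p(x2 | x1)
                 1: {0: 0.2, 1: 0.8}}
p_x3_given_x2 = {0: {0: 0.9, 1: 0.1},                 # p(x3 | x2)
                 1: {0: 0.4, 1: 0.6}}

# Eliminate x1: m12(x2) = sum_{x1} p(x1) * p(x2 | x1)
m12 = {x2: sum(p_x1[x1] * p_x2_given_x1[x1][x2] for x1 in (0, 1)) for x2 in (0, 1)}

# Eliminate x2: p(x3) = sum_{x2} m12(x2) * p(x3 | x2)
p_x3 = {x3: sum(m12[x2] * p_x3_given_x2[x2][x3] for x2 in (0, 1)) for x3 in (0, 1)}

print(p_x3)  # marginal distribution over x3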

I do not understand the remaining sections of this paper well, especially "Learning." I guess it is a seldom-used technique. The application example given at the end is not in vogue, perhaps because the paper was published in the early days of the field. I think ending the paper with a simpler application example would be better.

I think what I benefit from most in this paper is a better understanding of the relationships between different graphical models. To my surprise, the Kalman filter can also be expressed as a graphical model; I did not know that before.
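For my own notes (my summary, not a detail from the paper): the model underlying the Kalman filter is a linear-Gaussian state-space model, which is a chain-structured directed graphical model where each hidden state x_t depends on x_{t-1} and each observation y_t depends on x_t. The coefficients and noise levels in the sketch below are made up.

# My own illustration: sampling from a linear-Gaussian state-space model,
# i.e. a chain x_0 -> x_1 -> ... with observations y_t hanging off each x_t.
import random

A, C = 0.9, 1.0          # state transition and observation coefficients (made up)
q, r = 0.1, 0.2          # process and observation noise standard deviations

def sample_trajectory(steps=5, x=0.0):
    """Sample (x_t, y_t) pairs from p(x_t | x_{t-1}) and p(y_t | x_t)."""
    out = []
    for _ in range(steps):
        x = A * x + random.gauss(0.0, q)     # p(x_t | x_{t-1}) is Gaussian
        y = C * x + random.gauss(0.0, r)     # p(y_t | x_t) is Gaussian
        out.append((x, y))
    return out

print(sample_trajectory())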

Reference:
Michael I. Jordan, Zoubin Ghahramani, and Tommi S. Jaakkola, "An introduction to variational methods for graphical models," Learning in Graphical Models, Kluwer Academic Publishers, 1999.
