Wednesday, June 15, 2011

Bayes Rule

Bayes rule, given below, relates two conditional probabilities. It is one of the most common and powerful ingredients of algorithms in computer vision, as well as in many other fields.

P(A|B) = P(B|A) P(A) / P(B)
In any problem, we may want to predict A (a discrete class label or some continuous value) given the observed data B (e.g. our image, or some features computed from it). What Bayes rule allows us to do is frame this problem in terms of the likelihood P(B|A), the prior probability P(A), and the marginal probability P(B). These distributions can often be learned from training data, allowing us to make the prediction.
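As a minimal sketch of this idea, here is a tiny discrete example. The class labels, the feature, and all the probability values below are made-up illustrative numbers, not learned from any real data:

```python
# Prior P(A): belief about each class label before seeing the data.
# (Hypothetical numbers for illustration only.)
prior = {"cat": 0.7, "dog": 0.3}

# Likelihood P(B|A): probability of observing the feature B = "pointy ears"
# under each class. (Also hypothetical.)
likelihood = {"cat": 0.9, "dog": 0.2}

# Marginal P(B), via the law of total probability: sum over all classes.
marginal = sum(likelihood[a] * prior[a] for a in prior)

# Posterior P(A|B) by Bayes rule: likelihood times prior, normalized.
posterior = {a: likelihood[a] * prior[a] / marginal for a in prior}

print(posterior)
```

The posterior values sum to 1 by construction, and the class with the largest posterior would be our prediction.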

Bayes rule also helps us understand the difference between a generative and a discriminative learner. If we somehow learn the probability P(A|B) directly, then what we have is a discriminative classifier, with which we can predict values of A given B. If instead we learn the probabilities P(B|A) and P(A), then we have the joint distribution P(A,B) = P(B|A)P(A), and hence a generative classifier.
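The generative view can be sketched as follows. Again, the probability tables are illustrative assumptions; the point is only that once we have P(B|A) and P(A) we hold the full joint, and classification falls out of Bayes rule by normalizing over the classes:

```python
# Hypothetical learned distributions (illustrative numbers only).
prior = {"cat": 0.7, "dog": 0.3}                      # P(A)
likelihood = {                                         # P(B|A), discrete feature B
    "cat": {"pointy_ears": 0.9, "floppy_ears": 0.1},
    "dog": {"pointy_ears": 0.2, "floppy_ears": 0.8},
}

def joint(a, b):
    """Joint distribution P(A=a, B=b) = P(B=b|A=a) P(A=a)."""
    return likelihood[a][b] * prior[a]

def predict(b):
    """Posterior P(A|B=b): joint scores normalized by P(B=b)."""
    scores = {a: joint(a, b) for a in prior}
    z = sum(scores.values())                           # P(B=b)
    return {a: s / z for a, s in scores.items()}

print(predict("floppy_ears"))
```

Because we have the joint distribution, this model could also generate data (sample a class from P(A), then a feature from P(B|A)), which is exactly what a discriminative model of P(A|B) alone cannot do.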
