Bayesian ML methods calculate explicit probabilities, and are among the most practical approaches for certain types of learning problems.
Also useful for understanding learning algorithms that do not explicitly manipulate probabilities: Bayesian analysis justifies key design choices in ML algorithms.
Alternative error function: cross entropy.
New instances are classified by combining the predictions of multiple hypotheses, weighted by their posterior probabilities.
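The idea above can be sketched as follows. The hypotheses, posteriors, and instance here are illustrative assumptions, not part of the source; each hypothesis reports P(v | x, h), and votes are weighted by P(h | D):

```python
# Sketch of Bayes optimal classification: each hypothesis h votes for a
# class with weight P(h | D); the class with the largest total weight wins.

def bayes_optimal_classify(hypotheses, posteriors, x, classes):
    """Return argmax_v sum_h P(v | x, h) * P(h | D)."""
    scores = {v: 0.0 for v in classes}
    for h, p_h in zip(hypotheses, posteriors):
        for v in classes:
            scores[v] += h(x, v) * p_h  # h(x, v) plays the role of P(v | x, h)
    return max(scores, key=scores.get)

# Three toy hypotheses over classes {"+", "-"}; each returns P(v | x, h).
h1 = lambda x, v: 1.0 if v == "+" else 0.0
h2 = lambda x, v: 1.0 if v == "-" else 0.0
h3 = lambda x, v: 1.0 if v == "-" else 0.0

# h1 is individually the most probable hypothesis (posterior 0.4), yet the
# combined weight of h2 and h3 makes "-" the optimal classification.
print(bayes_optimal_classify([h1, h2, h3], [0.4, 0.3, 0.3], None, ["+", "-"]))
# prints "-"
```

This illustrates why the Bayes optimal classification can differ from the prediction of the single most probable (MAP) hypothesis.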
Bayesian methods provide a standard of optimal decision making against which other practical methods can be measured.
They require the probabilities to be known in advance; otherwise the probabilities are estimated from background knowledge.
Family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features.
Finding the most probable classification given the training data.
Randomly selecting a hypothesis according to the current posterior probability distribution, then classifying the new instance using that hypothesis.
Each instance x is described by a conjunction of attribute values. The target function can take on any value from a finite set V. The learner is asked to predict the target value/classification of a new instance.
Predict the target value that maximizes the posterior probability given the attribute values describing this instance.
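A minimal naive Bayes sketch for this setting: predict argmax_v P(v) * prod_i P(a_i | v), with probabilities estimated by counting. The tiny training set, attribute names, and Laplace smoothing constant are illustrative assumptions:

```python
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (attribute_tuple, target_value) pairs."""
    class_counts = Counter(v for _, v in examples)
    attr_counts = defaultdict(Counter)  # (attr_index, class) -> value counts
    for attrs, v in examples:
        for i, a in enumerate(attrs):
            attr_counts[(i, v)][a] += 1
    return class_counts, attr_counts

def classify(model, attrs):
    """Return argmax_v P(v) * prod_i P(a_i | v), with Laplace smoothing."""
    class_counts, attr_counts = model
    total = sum(class_counts.values())
    best_v, best_p = None, -1.0
    for v, n_v in class_counts.items():
        p = n_v / total  # prior estimate P(v)
        for i, a in enumerate(attrs):
            counts = attr_counts[(i, v)]
            # Smoothed estimate of P(a_i | v) to avoid zero counts.
            p *= (counts[a] + 1) / (n_v + len(counts) + 1)
        if p > best_p:
            best_v, best_p = v, p
    return best_v

examples = [(("sunny", "hot"), "no"), (("rain", "cool"), "yes"),
            (("sunny", "cool"), "yes"), (("rain", "hot"), "no")]
model = train(examples)
print(classify(model, ("sunny", "cool")))  # prints "yes"
```

The independence assumption lets each attribute contribute one factor P(a_i | v), so the number of probabilities to estimate grows linearly in the number of attributes.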