This process of computing the posterior distribution of variables given evidence is called probabilistic inference.
The joint probability function is, by the chain rule of probability, Pr(G, S, R) = Pr(G | S, R) Pr(S | R) Pr(R), where the names of the variables have been abbreviated to G = Grass wet (yes/no), S = Sprinkler turned on (yes/no), and R = Raining (yes/no).
The model can answer questions like "What is the probability that it is raining, given the grass is wet?"
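A query of this kind can be answered by summing the joint distribution over the unobserved variable (here the sprinkler S) and normalizing. A minimal sketch by brute-force enumeration, using illustrative conditional probabilities that are assumptions for the example rather than part of any fixed specification:

```python
# Sprinkler network: R -> S, R -> G, S -> G.
# The CPT numbers below are illustrative assumptions.
from itertools import product

P_R = {True: 0.2, False: 0.8}                               # Pr(R)
P_S_given_R = {True: 0.01, False: 0.4}                      # Pr(S=T | R)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}    # Pr(G=T | S, R)

def joint(g, s, r):
    """Pr(G=g, S=s, R=r) via the chain-rule factorization."""
    p_g = P_G_given_SR[(s, r)] if g else 1 - P_G_given_SR[(s, r)]
    p_s = P_S_given_R[r] if s else 1 - P_S_given_R[r]
    return p_g * p_s * P_R[r]

# Pr(R=T | G=T): sum out S in the numerator, normalize by Pr(G=T).
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(True, s, r) for s, r in product((True, False), repeat=2))
print(round(num / den, 4))  # ≈ 0.3577
```

Enumeration touches every entry of the joint distribution, so it scales exponentially in the number of variables; it is shown here only because the three-node example makes it easy to check by hand.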
Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
Formally, Bayesian networks are DAGs whose nodes represent random variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses.
For example, if m parent nodes represent m Boolean variables, then the probability function could be represented by a table of 2^m entries, one entry for each of the 2^m possible combinations of its parents being true or false.
Similar ideas may be applied to undirected, and possibly cyclic, graphs, such as Markov networks.
Then the situation can be modeled with a Bayesian network (shown to the right).
All three variables have two possible values, T (for true) and F (for false).
Efficient algorithms exist that perform inference and learning in Bayesian networks.
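One family of exact inference algorithms is variable elimination, which sums out hidden variables one at a time and reuses the resulting intermediate factors instead of enumerating the full joint distribution. A minimal sketch on the sprinkler network, with illustrative probability numbers that are assumptions for the example:

```python
# Variable elimination on the sprinkler network: eliminate S once,
# producing a factor tau(G, R) that serves every later query over (G, R).
# The CPT numbers are illustrative assumptions.
P_R = {True: 0.2, False: 0.8}                               # Pr(R)
P_S_given_R = {True: 0.01, False: 0.4}                      # Pr(S=T | R)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}    # Pr(G=T | S, R)

def p_s(s, r):
    return P_S_given_R[r] if s else 1 - P_S_given_R[r]

def p_g(g, s, r):
    return P_G_given_SR[(s, r)] if g else 1 - P_G_given_SR[(s, r)]

# Eliminate S: tau(g, r) = sum over s of Pr(s | r) * Pr(g | s, r).
tau = {(g, r): sum(p_s(s, r) * p_g(g, s, r) for s in (True, False))
       for g in (True, False) for r in (True, False)}

# Pr(R=T | G=T) computed from the reduced factors Pr(R) and tau.
num = P_R[True] * tau[(True, True)]
den = sum(P_R[r] * tau[(True, r)] for r in (True, False))
print(round(num / den, 4))  # ≈ 0.3577
```

On a three-node network the saving is negligible, but on larger networks a good elimination order keeps the intermediate factors small, which is what makes exact inference tractable for many sparsely connected models.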
Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks.
In order to fully specify the Bayesian network and thus fully represent the joint probability distribution, it is necessary to specify for each node X the probability distribution for X conditional upon X's parents.
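For Boolean variables, one conditional distribution per combination of parent values suffices, so the specification is often stored as a table. A minimal sketch, assuming Boolean nodes and a hypothetical `cpt_rows` helper (not a standard library function), with illustrative probabilities:

```python
from itertools import product

def cpt_rows(num_parents):
    """Enumerate the 2**m true/false combinations of m Boolean parents."""
    return list(product((True, False), repeat=num_parents))

# Node G (grass wet) has parents S and R, so its table needs 2**2 = 4 rows,
# each giving Pr(G=T | parent assignment).  Numbers are illustrative.
cpt_G = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.0}

# A root node such as R has no parents: its "table" is a single prior.
cpt_R = {(): 0.2}  # Pr(R=T)

assert len(cpt_rows(2)) == len(cpt_G) == 4
```

The table size grows as 2^m in the number of parents m, which is why practical models keep each node's parent set small or use compact parameterizations instead of full tables.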