
Bayesian inference


Bayesian inference (/ˈbeɪziən/ BAY-zee-ən or /ˈbeɪʒən/ BAY-zhən)[1] is a type of statistical inference. When evidence or new information becomes available, Bayes' theorem is used to change (or update) the probability of a hypothesis. Bayesian inference uses a prior distribution to estimate posterior probabilities. Bayesian inference is important in statistics, mathematical statistics, decision theory, and sequential analysis. Bayesian inference is used in science, engineering, philosophy, medicine, sport, and law.

Bayes' rule

Figure: A geometric visualisation of Bayes' theorem. In the table, the values 2, 3, 6 and 9 give the relative weights of each corresponding condition and case. The figures denote the cells of the table involved in each metric, the probability being the fraction of each figure that is shaded. This shows that P(A|B)·P(B) = P(B|A)·P(A), i.e. P(A|B) = P(B|A)·P(A)/P(B). Similar reasoning can be used to show that P(¬A|B) = P(B|¬A)·P(¬A)/P(B), and so on.
Contingency table

Evidence \ Hypothesis   Satisfies hypothesis H          Violates hypothesis ¬H             Total
Has evidence E          P(H|E)·P(E) = P(E|H)·P(H)       P(¬H|E)·P(E) = P(E|¬H)·P(¬H)       P(E)
No evidence ¬E          P(H|¬E)·P(¬E) = P(¬E|H)·P(H)    P(¬H|¬E)·P(¬E) = P(¬E|¬H)·P(¬H)    P(¬E) = 1−P(E)
Total                   P(H)                            P(¬H) = 1−P(H)                     1
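
The identity P(A|B)·P(B) = P(B|A)·P(A) from the figure caption can be checked with a short numerical sketch in Python (not part of the original article). For illustration only, the relative weights 2, 3, 6 and 9 are assumed to fill the four cells of a 2×2 table for A/¬A and B/¬B; the identity holds no matter how the cells are labelled.

    # Sketch (not from the article): check P(A|B)*P(B) = P(B|A)*P(A) on a 2x2 table.
    # The relative weights 2, 3, 6 and 9 are assumed to fill the cells as:
    #   (A and B) = 2, (A and not B) = 3, (not A and B) = 6, (not A and not B) = 9.
    weights = {("A", "B"): 2, ("A", "¬B"): 3, ("¬A", "B"): 6, ("¬A", "¬B"): 9}
    total = sum(weights.values())  # 20

    # Joint and marginal probabilities from the relative weights.
    p_A_and_B = weights[("A", "B")] / total                     # 0.10
    p_A = (weights[("A", "B")] + weights[("A", "¬B")]) / total  # 0.25
    p_B = (weights[("A", "B")] + weights[("¬A", "B")]) / total  # 0.40

    # Conditional probabilities.
    p_A_given_B = p_A_and_B / p_B   # 0.25
    p_B_given_A = p_A_and_B / p_A   # 0.40

    # Both sides equal the joint probability P(A and B) = 0.10.
    print(round(p_A_given_B * p_B, 6))        # 0.1
    print(round(p_B_given_A * p_A, 6))        # 0.1
    print(round(p_B_given_A * p_A / p_B, 6))  # Bayes' theorem: P(A|B) = 0.25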

Bayesian inference figures out the posterior probability from the prior probability and the "likelihood function". The likelihood function comes from a statistical model of the data. Bayes' theorem states (a small numerical sketch is given after the list below):

  P(H|E) = P(E|H)·P(H) / P(E)

where

  • H is a hypothesis that is changed by the data (or evidence). There are usually many hypotheses. The point of the test is to see which hypothesis is more likely.
  • P(H) is the prior probability. It estimates the probability of a hypothesis before there is any evidence.
  • E is the evidence, or data. It is any new data that is found.
  • P(H|E) is the posterior probability. This is what we want to know: the probability of the hypothesis after the evidence is seen.
  • P(E|H) is the likelihood function. It is the probability of seeing the evidence E if the hypothesis H is true.
  • P(E) is the marginal likelihood. It is the same for all possible hypotheses that are being tested. P(E) has to be greater than 0. If P(E) were 0, you would divide by zero.
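
To show how these pieces fit into Bayes' theorem, here is a minimal Python sketch (not part of the original article). The numbers are assumptions chosen only for illustration: a prior P(H) = 0.3, a likelihood P(E|H) = 0.8, and a likelihood under the alternative P(E|¬H) = 0.2.

    # Minimal sketch of a Bayesian update with one hypothesis H and its negation ¬H.
    # All numbers are assumptions chosen only to illustrate the formula.
    prior_H = 0.3                    # P(H): probability of H before the evidence
    prior_not_H = 1 - prior_H        # P(¬H)
    likelihood_E_given_H = 0.8       # P(E|H): chance of the evidence if H is true
    likelihood_E_given_not_H = 0.2   # P(E|¬H): chance of the evidence if H is false

    # Marginal likelihood P(E): the same for every hypothesis being tested,
    # and it must be greater than 0.
    marginal_E = (likelihood_E_given_H * prior_H
                  + likelihood_E_given_not_H * prior_not_H)   # 0.38

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    posterior_H = likelihood_E_given_H * prior_H / marginal_E
    print(round(posterior_H, 3))     # 0.632

In this made-up example, seeing the evidence raises the probability of the hypothesis from 0.3 to about 0.63.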

Further reading

  • Vallverdu, Jordi (2016). Bayesians Versus Frequentists: A Philosophical Debate on Statistical Reasoning. New York: Springer. ISBN 978-3-662-48638-2.
  • Clayton, Aubrey (August 2021). Bernoulli's Fallacy: Statistical Illogic and the Crisis of Modern Science. Columbia University Press. ISBN 978-0-231-55335-3.

References

