Expectation propagation
Expectation propagation (EP) is a technique in Bayesian machine learning.[1]
EP finds approximations to a probability distribution.[1] It does so iteratively, exploiting the factorization structure of the target distribution.[1] It differs from other Bayesian approximation approaches such as variational Bayesian methods.[1]
More specifically, suppose we wish to approximate an intractable probability distribution $p(\mathbf{x})$ with a tractable distribution $q(\mathbf{x})$. Expectation propagation achieves this approximation by minimizing the Kullback–Leibler divergence $\mathrm{KL}(p\|q)$.[1] Variational Bayesian methods minimize $\mathrm{KL}(q\|p)$ instead.[1]
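Written out, the two divergences differ only in which distribution the expectation is taken under (standard definitions):

$$\mathrm{KL}(p\|q) = \int p(\mathbf{x})\,\log\frac{p(\mathbf{x})}{q(\mathbf{x})}\,d\mathbf{x}, \qquad \mathrm{KL}(q\|p) = \int q(\mathbf{x})\,\log\frac{q(\mathbf{x})}{p(\mathbf{x})}\,d\mathbf{x}.$$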
If $q(\mathbf{x})$ is a Gaussian $\mathcal{N}(\mathbf{x}\mid\mu,\Sigma)$, then $\mathrm{KL}(p\|q)$ is minimized with $\mu$ and $\Sigma$ being equal to the mean of $p(\mathbf{x})$ and the covariance of $p(\mathbf{x})$, respectively; this is called moment matching.[1]
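A minimal numerical sketch of this moment-matching property, assuming a one-dimensional Gaussian-mixture target (the particular weights, means, and variances below are illustrative only):

```python
import numpy as np

# Illustrative target p(x): a two-component Gaussian mixture (not Gaussian itself).
weights = np.array([0.3, 0.7])
means = np.array([-2.0, 1.0])
variances = np.array([0.5, 1.5])

# Mean and variance of the mixture in closed form.
mu = np.sum(weights * means)
second_moment = np.sum(weights * (variances + means ** 2))
sigma2 = second_moment - mu ** 2

# The Gaussian q(x) = N(mu, sigma2) with these matched moments minimizes
# KL(p || q) over all Gaussians.
print(f"moment-matched q: mean = {mu:.4f}, variance = {sigma2:.4f}")

# Numerical check on a grid: the matched q should give a smaller KL(p || q)
# than a perturbed Gaussian.
def gauss(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

x = np.linspace(-12.0, 12.0, 24001)
dx = x[1] - x[0]
p = sum(w * gauss(x, m, v) for w, m, v in zip(weights, means, variances))

def kl_pq(m, v):
    q = gauss(x, m, v)
    return np.sum(p * np.log(p / q)) * dx

print("KL(p||q), moment matched:", kl_pq(mu, sigma2))
print("KL(p||q), perturbed     :", kl_pq(mu + 0.5, 2.0 * sigma2))
```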
Applications
Expectation propagation via moment matching plays a vital role in approximating the indicator functions that appear when deriving the message passing equations for TrueSkill.
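Such indicator factors give rise to truncated Gaussians, whose moments are available in closed form. The sketch below applies the standard truncated-normal moment formulas to moment-match a single factor $\mathbb{I}(x>0)$; the function name and example numbers are hypothetical, and this is not TrueSkill's full message-passing implementation:

```python
from math import sqrt
from scipy.stats import norm

def match_indicator_moments(mu, sigma2):
    """Moment-match a Gaussian to p(x) proportional to N(x; mu, sigma2) * I(x > 0).

    Uses the standard truncated-normal moment formulas; the Gaussian with the
    returned mean and variance minimizes KL(p || q) over all Gaussians.
    Illustrative sketch only, not TrueSkill's exact message equations.
    """
    sigma = sqrt(sigma2)
    z = mu / sigma
    lam = norm.pdf(z) / norm.cdf(z)  # phi(z) / Phi(z)
    matched_mu = mu + sigma * lam
    matched_sigma2 = sigma2 * (1.0 - lam * (lam + z))
    return matched_mu, matched_sigma2

# Example: belief N(0.5, 1.0) about a performance difference, combined with the
# observation that the difference was positive (player 1 beat player 2).
print(match_indicator_moments(0.5, 1.0))
```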
References
1. Thomas Minka (August 2–5, 2001). "Expectation Propagation for Approximate Bayesian Inference". In Jack S. Breese and Daphne Koller (eds.), UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence. University of Washington, Seattle, Washington, USA. pp. 362–369.