====== mcmc ======
http://
Suppose we want to compute the expectation of a function $T(X)$ of a random variable $X$ with distribution $\pi$:

$ E_\pi[T(X)] = \int T(x)\pi(x) dx. $

In Bayesian inference, for example, we are interested in the posterior mean $E(\theta|y)$ or the posterior variance $Var(\theta|y)$, both of which are expectations of this form with $\pi$ taken to be the posterior distribution of $\theta$ given $y$.
+ | |||
+ | One solution is to draw independent samples $ ( X^{(1)}, X^{(2)}, \cdots, X^{(N)} )$ from $\pi(x)$, then we can approximate | ||
+ | $ E_\pi[T(X)] \approx \frac{1}{N} \sum_{t=1}^N T( X^{(t) }) $ | ||
+ | |||
+ | Law of large numbers -> 위 근사는 adoptable | ||
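
As a concrete illustration (a sketch, not from the original page), the following Python snippet estimates $E_\pi[T(X)]$ by plain Monte Carlo, assuming for the example that $\pi$ is the standard normal distribution and $T(x) = x^2$:

<code python>
import numpy as np

# Plain Monte Carlo estimate of E_pi[T(X)].
# Assumptions for illustration: pi = standard normal, T(x) = x^2,
# so the true expectation is E[X^2] = 1.
rng = np.random.default_rng(0)

N = 100_000
samples = rng.standard_normal(N)  # independent draws X^(1), ..., X^(N) from pi
estimate = samples**2             # T applied to each sample
print(estimate.mean())            # (1/N) * sum_t T(X^(t)), close to 1
</code>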
+ | |||
+ | it is known that above approximation is still possible if we sample using a Markov chain. This is the main idea of MCMC method. | ||
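
To make this concrete, here is a minimal sketch of one standard MCMC algorithm, random-walk Metropolis (the target $\pi$ and the proposal step size are assumptions chosen for the example, not prescribed by the page):

<code python>
import numpy as np

# Random-walk Metropolis sampler.
# Assumptions for illustration: target pi = standard normal,
# Gaussian random-walk proposal with step size 1.0.
rng = np.random.default_rng(0)

def log_pi(x):
    # Log density of the target, up to an additive constant.
    return -0.5 * x * x

N = 50_000
chain = np.empty(N)
x = 0.0  # initial state X^(0)
for t in range(N):
    proposal = x + rng.normal(scale=1.0)       # propose a local move
    log_ratio = log_pi(proposal) - log_pi(x)   # Metropolis acceptance log-ratio
    if np.log(rng.uniform()) < log_ratio:      # accept with prob. min(1, ratio)
        x = proposal
    chain[t] = x

# The chain leaves pi invariant, so ergodic averages approximate
# E_pi[T(X)]; e.g. E[X^2] = 1 for the standard normal target.
print(np.mean(chain**2))
</code>

Even though successive states of the chain are dependent, the ergodic average above plays the same role as the independent-sample average in the previous section.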