Inference

This page is meant as a guide to interpreting the output of a Monte Carlo calculation.

Uncertainties

Let’s start with some notation. We say that $x\sim N(\mu,\epsilon)$ means that the variable $x$ is drawn from the normal distribution $N$ with mean $\mu$ and standard deviation $\epsilon$: $$ N(\mu,\epsilon) = \frac{1}{\epsilon \sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\epsilon^2}} $$

In a Monte Carlo calculation, we would like to know $\mu$. The output of the calculation gives us an average value $\bar{x}$, which is sampled from $N(\mu, \epsilon)$. From the error analysis, we know $\epsilon$ and $\bar{x}$. Therefore, if we knew $\mu$, we would know that $$ \rho(\bar{x} | \mu, \epsilon) = \frac{1}{\epsilon \sqrt{2\pi}} e^{-\frac{(\bar{x}-\mu)^2}{2\epsilon^2}} $$ This is read as the probability distribution of $\bar{x}$ given $\mu$ and $\epsilon$. However, this is exactly the opposite of what we’d like to know, which is $$ \rho(\mu | \bar{x},\epsilon), $$ since we know $\bar{x}$ and $\epsilon$, but not $\mu$.
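As a sanity check, here is a minimal sketch (with made-up values of $\mu$ and $\epsilon$) showing that repeated independent calculations produce averages $\bar{x}$ scattered around $\mu$ with width $\epsilon$:

```python
import numpy as np

rng = np.random.default_rng(0)

mu, epsilon = 1.0, 0.1   # hypothetical true mean and error bar
nruns = 10000            # number of independent "calculations"

# Each run reports an average xbar drawn from N(mu, epsilon).
xbar = rng.normal(mu, epsilon, size=nruns)

# The spread of the reported averages matches the quoted error bar.
print(np.mean(xbar), np.std(xbar))  # ~ mu, ~ epsilon
```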

Bayes' theorem

Bayes' theorem relates $\rho(\mu | \bar{x},\epsilon)$ to $\rho(\bar{x}| \mu,\epsilon)$ as $$ \rho(\mu | \bar{x},\epsilon) = \frac{\rho(\bar{x}| \mu, \epsilon) \rho(\mu)}{\rho(\bar{x},\epsilon)}. $$ Here $\rho(\bar{x},\epsilon)$ is a normalization constant, and $\rho(\mu)$ is called the prior: the probability distribution of $\mu$ in the absence of any data.
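With a flat prior, the posterior is just the normalized likelihood. A minimal numerical sketch, using hypothetical values of $\bar{x}$ and $\epsilon$:

```python
import numpy as np

xbar, epsilon = 0.5, 0.1  # hypothetical Monte Carlo result and error bar

# Grid of candidate values of mu, with a flat prior rho(mu) = const.
mu = np.linspace(xbar - 5 * epsilon, xbar + 5 * epsilon, 1001)
prior = np.ones_like(mu)

# Likelihood rho(xbar | mu, epsilon) evaluated at each candidate mu.
likelihood = np.exp(-((xbar - mu) ** 2) / (2 * epsilon**2))

# Posterior via Bayes' theorem; normalizing numerically on the grid
# absorbs the constant rho(xbar, epsilon).
posterior = likelihood * prior
posterior /= posterior.sum() * (mu[1] - mu[0])
```

With this flat prior, the posterior for $\mu$ is itself a Gaussian, centered at $\bar{x}$ with width $\epsilon$.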

Exercise Suppose that $\bar{x}_1$ is a random variable drawn from $N(0,\epsilon)$. Assume that $\rho(\mu)=c$, where $c$ is a constant. What is the probability that $0\in [\bar{x}_1-\epsilon,\bar{x}_1+\epsilon ]$? You can determine this numerically or using erf. How would this change if we had some prior belief about $\mu$?
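One way to set up the numerical check (the specific value of $\epsilon$ is arbitrary here), with the erf answer printed alongside for comparison:

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
epsilon = 0.1
nsamples = 100000

# Draw many realizations of xbar_1 from N(0, epsilon) and check how
# often the interval [xbar_1 - epsilon, xbar_1 + epsilon] covers 0.
xbar1 = rng.normal(0.0, epsilon, size=nsamples)
coverage = np.mean(np.abs(xbar1) < epsilon)

# Analytic answer: P(|xbar_1| < epsilon) = erf(1/sqrt(2)) ≈ 0.683.
print(coverage, erf(1 / np.sqrt(2)))
```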

Traditional statistics

The approach of traditional (frequentist) statistics is to fix a value of $\mu$ (the null hypothesis) and examine the probability that we would have gotten a result at least as extreme as the $\bar{x}$ we observed.

Exercise For example, suppose I do two Monte Carlo calculations and get $\bar{x}_1$, $\epsilon_1$ and $\bar{x}_2$, $\epsilon_2$. What is the probability that I would have gotten $\bar{x}_1$ and $\bar{x}_2$ if $\mu_1=\mu_2$? Note that you will actually need to define what counts as a result like $\bar{x}_1$ and $\bar{x}_2$; typically, one considers results at least as extreme as the observed ones.
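One standard framing is a two-sample $z$-test: under the null hypothesis $\mu_1=\mu_2$, the difference $\bar{x}_1-\bar{x}_2$ is drawn from $N(0,\sqrt{\epsilon_1^2+\epsilon_2^2})$. A minimal sketch with hypothetical numbers:

```python
import numpy as np
from scipy.special import erfc

def pvalue_equal_means(xbar1, eps1, xbar2, eps2):
    """Two-sided p-value for mu_1 = mu_2, assuming Gaussian error bars.

    Under the null hypothesis, xbar1 - xbar2 is drawn from
    N(0, sqrt(eps1**2 + eps2**2)).
    """
    sigma = np.sqrt(eps1**2 + eps2**2)
    z = abs(xbar1 - xbar2) / sigma
    # Probability of a difference at least as extreme as the observed one.
    return erfc(z / np.sqrt(2))

# Hypothetical results from two Monte Carlo calculations.
print(pvalue_equal_means(1.00, 0.01, 1.03, 0.01))
```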

Puzzles

  • From a set of wave functions $\Psi_i$, find the wave function with the lowest energy using VMC estimation.
  • Find the minimum energy bond length given energy evaluations along the dissociation curve.

More general statistical inference/keywords

Regression

Suppose you are varying a parameter (such as bond length $a$). You compute the value $\bar{x}(a_j)$ for a set of points $a_j$. Most of the time, we want to use linear regression to analyze this situation if at all possible.
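Since each $\bar{x}(a_j)$ comes with an error bar $\epsilon_j$, the fit should weight points accordingly. A minimal sketch with synthetic data, fitting a quadratic model (still linear in its coefficients); the model and numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bond lengths and noisy "energy" estimates with error bars.
a = np.linspace(1.0, 2.0, 8)
eps = np.full_like(a, 0.005)
xbar = 0.5 * (a - 1.4) ** 2 - 1.0 + rng.normal(0.0, eps)

# Weighted least squares for a quadratic model; numpy's convention is
# to weight each point by 1/epsilon_j (not 1/epsilon_j**2).
coeffs = np.polyfit(a, xbar, deg=2, w=1.0 / eps)
a_min = -coeffs[1] / (2 * coeffs[0])  # vertex of the fitted parabola
print(coeffs, a_min)
```

This is also one route to the bond-length puzzle above: the vertex of the fitted parabola estimates the minimum-energy bond length.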

Bootstrap

The bootstrap is a way to estimate uncertainties when the distribution of the estimator is not known to be normal. In our work, it’s usually most useful when there is a nonlinear transformation of the output of Monte Carlo data; for example, when the data is put through an exponential or another nonlinear function.
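A minimal sketch of the idea, for a hypothetical exponential of a Monte Carlo mean: resample the data with replacement, recompute the nonlinear quantity each time, and take the spread of the results as the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Monte Carlo samples of some quantity x.
x = rng.normal(1.0, 0.5, size=2000)

# We want the uncertainty of a nonlinear function of the mean,
# here f = exp(mean(x)), where linear error propagation can fail.
nboot = 1000
estimates = np.empty(nboot)
for i in range(nboot):
    resample = rng.choice(x, size=x.size, replace=True)
    estimates[i] = np.exp(np.mean(resample))

# The spread of the bootstrap estimates is the uncertainty estimate.
print(np.mean(estimates), np.std(estimates))
```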

Diagonalizing random matrices

It turns out that diagonalizing a matrix whose elements carry statistical uncertainty is rather challenging; this is a long story, and at the time of this writing we are preparing a publication on it!
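To get a feel for why, here is a toy experiment (not the method in preparation, just an illustration): eigenvalues are nonlinear functions of the matrix elements, so element-wise noise does not average out, and for example the lowest eigenvalue of a noisy symmetric matrix comes out biased low.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy experiment: add symmetric Gaussian noise to a fixed matrix and
# diagonalize many times.
H = np.array([[0.0, 0.5],
              [0.5, 1.0]])
eps = 0.2
exact = np.linalg.eigvalsh(H)

lowest = []
for _ in range(5000):
    noise = rng.normal(0.0, eps, size=H.shape)
    noise = (noise + noise.T) / 2  # keep the perturbed matrix symmetric
    lowest.append(np.linalg.eigvalsh(H + noise)[0])

# The average lowest eigenvalue falls below the exact value:
# diagonalization is nonlinear, so the noise introduces a bias.
print(exact[0], np.mean(lowest))
```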