The Cramér-Rao lower bound places a lower bound on the variance of an estimator. It extends the information inequality to the case of repeated sampling from a distribution.

One dimension

Letting $b(\theta) = g(\theta)-\theta$ denote the bias of an estimator $\th$ (so that $g(\theta)$ is its expected value), and assuming an iid sample of size $n$, the information inequality becomes

\[\Var \th \ge \frac{(1+\dot{b}(\theta^*))^2}{n\fI(\theta^*)}\]

or, in the case of an unbiased estimator,

\[\Var \th \ge \frac{1}{n\fI(\theta^*)}\]

This bound (in either form) is known as the Cramér-Rao lower bound. Note that this is not an asymptotic result – it is an inequality that is true for all values of $n$.
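
For a concrete illustration, suppose $X_i \iid N(\theta, \sigma^2)$ with $\sigma^2$ known, so that $\fI(\theta) = 1/\sigma^2$. The unbiased form of the bound then reads

\[\Var \th \ge \frac{\sigma^2}{n},\]

which the sample mean $\bar{X}$ attains exactly. The biased form can be illustrated with $\th = \tfrac{n}{n+1}\bar{X}$: here $b(\theta) = -\theta/(n+1)$ and $\dot{b}(\theta) = -1/(n+1)$, so the bound is $\left(\tfrac{n}{n+1}\right)^2 \sigma^2/n$, which again equals $\Var \th$ exactly.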

Multiple dimensions

Suppose $X_i \iid p(x \vert \bts)$, with $\fI(\bts)$ positive definite. Let $\bth$ be an unbiased estimator of $\bt$. If the conditions for the information inequality are met, we have

\[\Var \bth \gge \frac{1}{n} \fI(\bts)^{-1},\]

the multiparameter Cramér-Rao lower bound.
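
For a concrete illustration, suppose $X_i \iid N(\mu, \sigma^2)$ with $\bt = (\mu, \sigma^2)$ and both components unknown. Then

\[\fI(\bt) = \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 1/(2\sigma^4) \end{pmatrix},\]

so the components of any unbiased estimator $\bth = (\hat{\mu}, \hat{\sigma}^2)$ satisfy $\Var \hat{\mu} \ge \sigma^2/n$ and $\Var \hat{\sigma}^2 \ge 2\sigma^4/n$, the diagonal entries of $\frac{1}{n} \fI(\bt)^{-1}$.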

Nuisance parameters

An important corollary of the above arises when we are estimating only a subset of $\bt$, say $\bt_1$, with the remaining parameters $\bt_2$ treated as so-called “nuisance parameters”. Partitioning $\fI(\bts)$ into blocks conformably with $(\bt_1, \bt_2)$, the bound becomes:

\[\Var \bth_1 \gge \frac{1}{n} \left( \fI_{11} - \fI_{12} \fI_{22}^{-1} \fI_{21} \right)^{-1}.\]

This is a direct consequence of the formula for the inverse of a partitioned matrix: the upper-left block of $\fI(\bts)^{-1}$ is the inverse of the Schur complement $\fI_{11} - \fI_{12} \fI_{22}^{-1} \fI_{21}$.
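
One consequence worth noting: $\fI_{12} \fI_{22}^{-1} \fI_{21}$ is positive semidefinite, so the Schur complement is no larger than $\fI_{11}$ in the positive definite ordering, and inverting reverses that ordering. Hence

\[\frac{1}{n} \left( \fI_{11} - \fI_{12} \fI_{22}^{-1} \fI_{21} \right)^{-1} \gge \frac{1}{n} \fI_{11}^{-1};\]

that is, having to estimate the nuisance parameters can only increase the bound for $\bt_1$ relative to the case in which they are known, with equality when $\fI_{12} = 0$.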