In a result analogous to the central limit theorem, maximum likelihood estimates are asymptotically multivariate normal, provided that the model is smooth and log-concave.

Theorem (Asymptotic normality of the MLE): Suppose conditions (A)-(D) are met. Then the maximum likelihood estimator \(\bth\) satisfies

\[\sqrt{n}(\bth - \bts) \inD \Norm(\zero, \fI(\bts)^{-1}).\]
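As a simple one-parameter illustration: if \(X_1, \ldots, X_n\) are independent draws from an exponential distribution with rate \(\theta\), then \(\hat{\theta} = 1/\bar{X}\) and \(\fI(\theta) = 1/\theta^2\), so the theorem yields

\[\sqrt{n}(\hat{\theta} - \theta^*) \inD \Norm(0, (\theta^*)^2).\]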

Proof: For the MLE, \(\u(\bth)=\zero\). Since the MLE is consistent under conditions (A)-(D), a Taylor series expansion of the score about \(\bts\) yields

\[\tfrac{1}{\sqrt{n}} \u(\bts) = \left[ \fI(\bts) + o_p(1) \right] \sqrt{n}(\bth-\bts).\]
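In more detail (writing \(H(\bt)\) for the Hessian of the log-likelihood, notation used only in this sketch), a mean value expansion of the score gives

\[\as{ \zero = \u(\bth) &= \u(\bts) + H(\tilde{\bt})(\bth - \bts) \\ \tfrac{1}{\sqrt{n}} \u(\bts) &= \left[ -\tfrac{1}{n} H(\tilde{\bt}) \right] \sqrt{n}(\bth - \bts), }\]

where \(\tilde{\bt}\) lies between \(\bth\) and \(\bts\); by the law of large numbers and the consistency of \(\bth\), the bracketed matrix is \(\fI(\bts) + o_p(1)\).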

The term on the left converges in distribution to \(\Norm(\zero, \fI(\bts))\) by the central limit theorem, since the score is a sum of independent terms with mean \(\zero\) and variance \(\fI(\bts)\), while the term in the square brackets converges in probability to \(\fI(\bts)\). Thus, by a minor extension of Slutsky's theorem (the bracketed matrix converges in probability to an invertible constant, so we may multiply both sides by its inverse), we have

\[\as{ \sqrt{n}(\bth-\bts) &\inD \fI(\bts)^{-1} \Norm(\zero,\fI(\bts)) \\ &=_d \Norm(\zero, \fI(\bts)^{-1}), }\]

since \(\fI(\bts)^{-1}\) is symmetric and \(\fI(\bts)^{-1} \fI(\bts) \fI(\bts)^{-1} = \fI(\bts)^{-1}\).
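As a quick empirical check, here is a minimal simulation sketch of the exponential example above; the model choice, parameter values, and seed are all illustrative.

```python
import numpy as np

# Empirical check of the theorem for the exponential example:
# X_i ~ Exponential(rate = theta*), MLE = 1 / sample mean,
# Fisher information I(theta) = 1/theta^2, so the theorem predicts
# sqrt(n) * (theta_hat - theta*) ~ N(0, theta*^2) for large n.
rng = np.random.default_rng(seed=0)
theta_star = 2.0   # true rate parameter (illustrative choice)
n = 2_000          # sample size per replicate
reps = 5_000       # number of Monte Carlo replicates

# Each row is one sample of size n; numpy parameterizes by scale = 1/rate.
samples = rng.exponential(scale=1.0 / theta_star, size=(reps, n))
theta_hat = 1.0 / samples.mean(axis=1)  # MLE of the rate in each replicate

z = np.sqrt(n) * (theta_hat - theta_star)
print("empirical variance of sqrt(n)(MLE - theta*):", z.var())
print("variance predicted by the theorem, theta*^2:", theta_star**2)
```

The empirical variance should land close to \((\theta^*)^2 = 4\), in line with the theorem.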