If the expected value and variance of an estimator are known, then it is rather easy to prove convergence in probability by combining the following two theorems.
Theorem: If \(\x_n \overset{r}{\longrightarrow} \x\) for some \(r > 0\), then \(\x_n \inP \x\).
Proof: Homework assignment.
Theorem: If \(\a \in \real^d\), then \(\x_n \inQM \a\) if and only if \(\Ex \x_n \to \a\) and \(\Var \x_n \to \zero\).
Proof: Homework assignment.
Combining these two theorems (with \(r = 2\)), \(\bth\) is a consistent estimator of \(\bt\) if \(\Ex\bth \to \bt\) and \(\Var\bth \to \zero\).
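As a brief illustration of how these two conditions are checked in practice (a sketch only, with \(\bar{X}_n\), \(\mu\), and \(\sigma^2\) introduced here just for this example): assume \(X_1, \dots, X_n\) are i.i.d.\ with mean \(\mu\) and finite variance \(\sigma^2\), and let \(\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i\) be the sample mean. Then
\[
  \Ex \bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} \Ex X_i = \mu,
  \qquad
  \Var \bar{X}_n = \frac{1}{n^2}\sum_{i=1}^{n} \Var X_i = \frac{\sigma^2}{n} \to 0,
\]
so \(\bar{X}_n \inQM \mu\) by the second theorem, and hence \(\bar{X}_n \inP \mu\) by the first theorem with \(r = 2\).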