For efficiently estimating the normal mean μ under right censoring (threshold c, with σ known), we compare two approaches within the maximum likelihood estimation (MLE) framework. Approach I is a hierarchical MLE that uses only the empirical censoring probability. Approach II is the direct MLE, in which the expectation-maximization (EM) algorithm is applied to all individual observations. We use a discrete approximation to show that the asymptotic variance of the Approach II estimator equals the inverse Fisher information computed from the full log-likelihood. We prove that Approach II yields a uniformly smaller asymptotic variance than Approach I and that the variance ratio is a decreasing function of the threshold c. We further prove several supporting results and demonstrate graphically that the EM algorithm converges monotonically to the unique MLE.
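As a concrete numerical sketch of the two approaches (illustrative only, not the paper's implementation), the following assumes observations X_i ~ N(μ, σ²) right-censored at a known threshold c with σ known; all function names and the simulation setup are ours. Approach I inverts the censoring probability P(X ≥ c) = 1 − Φ((c − μ)/σ) at its empirical estimate, while Approach II runs the EM iteration, imputing each censored value by its conditional mean μ + σ·φ(z)/(1 − Φ(z)) with z = (c − μ)/σ.

```python
import math
import random

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def inv_normal_cdf(p, lo=-10.0, hi=10.0):
    # Bisection inverse of the standard normal CDF (illustrative helper).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def approach1_mu(obs, n_cens, c, sigma):
    # Hierarchical MLE: uses only the empirical censoring probability.
    # P(X >= c) = 1 - Phi((c - mu)/sigma)  =>  mu = c - sigma * Phi^{-1}(1 - p_hat)
    n = len(obs) + n_cens
    p_hat = n_cens / n
    return c - sigma * inv_normal_cdf(1.0 - p_hat)

def approach2_mu_em(obs, n_cens, c, sigma, mu0=0.0, tol=1e-10, max_iter=500):
    # Direct MLE via EM on all individual observations.
    n = len(obs) + n_cens
    s_obs = sum(obs)
    mu = mu0
    for _ in range(max_iter):
        z = (c - mu) / sigma
        # E-step: impute each censored value by E[X | X >= c; mu].
        e_cens = mu + sigma * normal_pdf(z) / (1.0 - normal_cdf(z))
        # M-step: sample mean of the completed data.
        mu_new = (s_obs + n_cens * e_cens) / n
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# Simulated example (hypothetical parameter values).
random.seed(0)
mu_true, sigma, c = 2.0, 1.0, 2.5
x = [random.gauss(mu_true, sigma) for _ in range(5000)]
obs = [v for v in x if v < c]      # fully observed values
n_cens = len(x) - len(obs)         # number censored at c
print(approach1_mu(obs, n_cens, c, sigma))
print(approach2_mu_em(obs, n_cens, c, sigma))
```

On simulated data both estimators land near the true mean; repeating the simulation many times would exhibit the variance ordering the abstract states, with Approach II the less variable of the two.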