The predictive mean matching method is also an imputation method available for continuous variables. It is similar to the regression method except that for each missing value, it imputes a value randomly from a set of observed values whose predicted values are closest to the predicted value for the missing value from the simulated regression model (Heitjan and Little, 1991; Schenker and Taylor, 1996).
Following the description of the model in the section Monotone and FCS Regression Methods, the following steps are used to generate imputed values:
New parameters $\boldsymbol{\beta}_* = (\beta_{*0}, \beta_{*1}, \ldots, \beta_{*k})$ and $\sigma_{*j}^{2}$ are drawn from the posterior predictive distribution of the parameters. That is, they are simulated from $(\hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_k)$, $\hat{\sigma}_j^{2}$, and $\mathbf{V}_j$. The variance is drawn as

\[ \sigma_{*j}^{2} \;=\; \hat{\sigma}_j^{2}\,(n_j - k - 1)\,/\,g \]

where $g$ is a $\chi^{2}_{\,n_j-k-1}$ random variate and $n_j$ is the number of nonmissing observations for $Y_j$. The regression coefficients are drawn as

\[ \boldsymbol{\beta}_* \;=\; \hat{\boldsymbol{\beta}} + \sigma_{*j}\,\mathbf{V}_{hj}'\,\mathbf{Z} \]

where $\mathbf{V}_{hj}$ is the upper triangular matrix in the Cholesky decomposition, $\mathbf{V}_j = \mathbf{V}_{hj}' \mathbf{V}_{hj}$, and $\mathbf{Z}$ is a vector of $k+1$ independent random normal variates.
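To make this parameter-drawing step concrete, here is a minimal sketch in Python with NumPy (not the PROC MI implementation). It assumes that $\mathbf{V}_j$ is the $(\mathbf{X}'\mathbf{X})^{-1}$ matrix from the least squares fit of $Y_j$ on its covariates; the function name draw_posterior_params is hypothetical.

```python
import numpy as np

def draw_posterior_params(X, y, rng):
    """Draw (beta_star, sigma2_star) from the approximate posterior,
    mirroring the variance and coefficient draws described above.
    X: (n_j, k+1) design matrix (first column of ones); y: observed Y_j values.
    """
    n_j, p = X.shape          # p = k + 1 parameters (intercept + k covariates)
    df = n_j - p              # degrees of freedom, n_j - k - 1

    # Least squares fit: beta_hat, sigma_hat^2, and V_j = (X'X)^{-1} (an assumption here)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / df

    # sigma*_j^2 = sigma_hat_j^2 * (n_j - k - 1) / g, with g a chi-square(df) variate
    g = rng.chisquare(df)
    sigma2_star = sigma2_hat * df / g

    # beta* = beta_hat + sigma*_j * V_hj' Z, with V_j = V_hj' V_hj;
    # np.linalg.cholesky returns the lower triangle L with V_j = L L',
    # so L plays the role of V_hj'.
    L = np.linalg.cholesky(XtX_inv)
    z = rng.standard_normal(p)
    beta_star = beta_hat + np.sqrt(sigma2_star) * (L @ z)

    return beta_star, sigma2_star
```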
For each missing value, a predicted value

\[ y_{i*} \;=\; \beta_{*0} + \beta_{*1}\,x_1 + \beta_{*2}\,x_2 + \cdots + \beta_{*k}\,x_k \]

is computed with the covariate values $x_1, x_2, \ldots, x_k$.
A set of $k_0$ observations whose corresponding predicted values are closest to $y_{i*}$ is generated. You can specify $k_0$ with the K= option.
The missing value is then replaced by a value drawn randomly from these $k_0$ observed values.
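Continuing the sketch under the same assumptions, a hypothetical pmm_impute function can illustrate the remaining steps: it computes predicted values for the observed and missing cases from the drawn coefficients, finds for each missing case the $k_0$ observed cases whose predicted values are closest, and draws the imputed value at random from those donors. Names and structure are illustrative, not the PROC MI implementation.

```python
import numpy as np

def pmm_impute(X_obs, y_obs, X_mis, beta_star, k0, rng):
    """Impute missing values by predictive mean matching.
    X_obs/y_obs: design matrix and responses for the nonmissing cases;
    X_mis: design matrix rows for the missing cases;
    beta_star: coefficients drawn from the posterior (see the earlier sketch);
    k0: number of closest observed cases used as donors (the K= option).
    """
    # Predicted values y_i* = beta*_0 + beta*_1 x_1 + ... + beta*_k x_k
    pred_obs = X_obs @ beta_star
    pred_mis = X_mis @ beta_star

    imputed = np.empty(len(pred_mis))
    for i, p in enumerate(pred_mis):
        # k0 observed cases whose predicted values are closest to p
        donors = np.argsort(np.abs(pred_obs - p))[:k0]
        # replace the missing value by a random draw from the donors' observed values
        imputed[i] = y_obs[rng.choice(donors)]
    return imputed

# Illustrative usage with simulated data (values are arbitrary)
rng = np.random.default_rng(42)
n, k = 100, 2
X_full = np.column_stack([np.ones(n), rng.standard_normal((n, k))])
y_full = X_full @ np.array([1.0, 0.5, -0.3]) + rng.standard_normal(n)
miss = rng.random(n) < 0.2                      # roughly 20% missing at random
X_o, y_o = X_full[~miss], y_full[~miss]

# In the full procedure beta_star comes from the posterior draw sketched earlier;
# an ordinary least squares fit is used here only to keep this example standalone.
beta_star = np.linalg.lstsq(X_o, y_o, rcond=None)[0]
y_imputed = pmm_impute(X_o, y_o, X_full[miss], beta_star, k0=5, rng=rng)
```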
The predictive mean matching method requires the number of closest observations $k_0$ to be specified. A smaller $k_0$ tends to increase the correlation among the multiple imputations for the missing observation and results in a higher variability of point estimators in repeated sampling. On the other hand, a larger $k_0$ tends to lessen the effect from the imputation model and results in biased estimators (Schenker and Taylor, 1996, p. 430).
The predictive mean matching method ensures that imputed values are plausible; it might be more appropriate than the regression method if the normality assumption is violated (Horton and Lipsitz, 2001, p. 246).