This example contrasts several of the robust methods available in the ROBUSTREG procedure.
The following statements generate 1,000 random observations. The first 900 observations are from a linear model, and the last 100 observations are significantly biased in the Y direction. In other words, 10% of the observations are contaminated with outliers.
data a (drop=i);
   do i=1 to 1000;
      x1=rannor(1234);
      x2=rannor(1234);
      e=rannor(1234);
      if i > 900 then y=100 + e;
      else y=10 + 5*x1 + 3*x2 + .5*e;
      output;
   end;
run;
The following statements invoke PROC REG and PROC ROBUSTREG with the data set a:
proc reg data=a;
   model y = x1 x2;
run;

proc robustreg data=a method=m;
   model y = x1 x2;
run;

proc robustreg data=a method=mm seed=100;
   model y = x1 x2;
run;

proc robustreg data=a method=s seed=100;
   model y = x1 x2;
run;

proc robustreg data=a method=lts seed=100;
   model y = x1 x2;
run;
The tables of parameter estimates that are generated by using M estimation, MM estimation, S estimation, and LTS estimation in the ROBUSTREG procedure are shown in Output 86.1.2, Output 86.1.3, Output 86.1.4, and Output 86.1.5, respectively. For comparison, the ordinary least squares (OLS) estimates that are produced by the REG procedure (see Chapter 85: The REG Procedure) are shown in Output 86.1.1. The four robust methods, M, MM, S, and LTS, correctly estimate the regression coefficients for the underlying model (10, 5, and 3), but the OLS estimate does not.
The next statements demonstrate that if the percentage of contamination is increased to 40%, the M method and the MM method with default options fail to estimate the underlying model. Output 86.1.6 and Output 86.1.7 display these estimates. However, by tuning the constant c for the M method and the constants INITH and K0 for the MM method, you can increase the breakdown values of the estimates and recover the correct model. Output 86.1.8 and Output 86.1.9 display these estimates. Similarly, you can tune the EFF= option for the S method and the H= option for the LTS method to estimate the underlying model correctly with those methods. Those results are not presented, but example statements are sketched after the following code.
data b (drop=i);
   do i=1 to 1000;
      x1=rannor(1234);
      x2=rannor(1234);
      e=rannor(1234);
      if i > 600 then y=100 + e;
      else y=10 + 5*x1 + 3*x2 + .5*e;
      output;
   end;
run;
proc robustreg data=b method=m;
   model y = x1 x2;
run;

proc robustreg data=b method=mm;
   model y = x1 x2;
run;

proc robustreg data=b method=m(wf=bisquare(c=2));
   model y = x1 x2;
run;

proc robustreg data=b method=mm(inith=502 k0=1.8);
   model y = x1 x2;
run;
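The corresponding high-breakdown S and LTS fits for data set b are not displayed. As a rough sketch (not part of the documented example), you could request them with statements like the following, which borrow the K0=1.8 and H=502 tuning values that this example uses later for data set c; tuning EFF= for the S method, as mentioned above, is an alternative way to adjust the same efficiency/breakdown tradeoff.

/* Sketch only: tuning values borrowed from the data set c fits below */
proc robustreg data=b method=s(k0=1.8);
   model y = x1 x2;
run;

proc robustreg data=b method=lts(h=502);
   model y = x1 x2;
run;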
When there are bad leverage points, the M method fails to estimate the underlying model no matter what constant c you use. In this case, other methods (LTS, S, and MM) in PROC ROBUSTREG, which are robust to bad leverage points, correctly estimate the underlying model.
The following statements generate and analyze 1,000 observations that follow the same 40% contamination pattern as data set b, except that the first 10 observations (1% of the data) are replaced with bad high-leverage points.
data c (drop=i);
   do i=1 to 1000;
      x1=rannor(1234);
      x2=rannor(1234);
      e=rannor(1234);
      if i > 600 then y=100 + e;
      else y=10 + 5*x1 + 3*x2 + .5*e;
      if i < 11 then x1=200 * rannor(1234);
      if i < 11 then x2=200 * rannor(1234);
      if i < 11 then y=100*e;
      output;
   end;
run;
proc robustreg data=c method=mm(inith=502 k0=1.8) seed=100;
   model y = x1 x2;
run;

proc robustreg data=c method=s(k0=1.8) seed=100;
   model y = x1 x2;
run;

proc robustreg data=c method=lts(h=502) seed=100;
   model y = x1 x2;
run;
Output 86.1.10 displays the MM estimates with initial LTS estimates, Output 86.1.11 displays the S estimates, and Output 86.1.12 displays the LTS estimates.
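To see the failure of the M method in the presence of bad leverage points, you could rerun the tuned M fit on data set c. This is a sketch for verification rather than part of the documented example; it reuses the c=2 tuning constant from the earlier M fit of data set b.

/* Sketch only: per the discussion above, this fit is not expected
   to recover the underlying coefficients 10, 5, and 3 */
proc robustreg data=c method=m(wf=bisquare(c=2));
   model y = x1 x2;
run;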