
Friday 2 December 2011

Serial Correlation

Violations of Regression Assumptions include:
1 - Heteroskedasticity
2 - Serial Correlation
3 - Multicollinearity 
Heteroskedasticity has already been discussed in the previous post.


Regression errors are said to be serially correlated when they are correlated across observations. This phenomenon usually arises in time-series analysis. Serially correlated errors are also known as auto-correlated errors.


Types of Serial Correlation:
1. Positive serial correlation: Serial correlation in which a positive error for one observation increases the probability of a positive error for another observation, and vice versa.
2. Negative serial correlation: Serial correlation in which a positive error for one observation increases the probability of a negative error for another observation, and vice versa. (A simulation sketch of both types follows this list.)
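The difference between the two types can be seen by simulating the errors directly. The following is a minimal sketch, not part of the original post; the AR(1) coefficient phi = ±0.8 and the sample size of 500 are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)


def ar1_errors(phi, n=500):
    """Generate AR(1) errors: e_t = phi * e_(t-1) + white noise."""
    e = np.zeros(n)
    noise = rng.standard_normal(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + noise[t]
    return e


for phi in (0.8, -0.8):  # positive vs. negative serial correlation
    e = ar1_errors(phi)
    lag1_corr = np.corrcoef(e[1:], e[:-1])[0, 1]
    print(f"phi = {phi:+.1f}  ->  lag-1 correlation of the errors ≈ {lag1_corr:+.2f}")

With phi = +0.8 the lag-1 correlation of the errors is strongly positive (positive serial correlation); with phi = -0.8 it is strongly negative.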


Consequences of Serial Correlation:
  • Linear regression produces incorrect estimates of the regression coefficients' standard errors.
  • When one of the independent variables is a lagged value of the dependent variable, serial correlation makes all the parameter estimates inconsistent and invalid. Otherwise, serial correlation does not affect the consistency of the estimated regression coefficients.
  • Serial correlation therefore leads to invalid statistical inferences.
  • In case of positive serial correlation (illustrated in the simulation sketch after this list):
    • Standard errors are underestimated
    • T-statistics are inflated
    • Type-I error increases
  • In case of negative serial correlation:
    • Standard errors are overstated
    • F-statistics are understated
    • Type-II error increases
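The effect of positive serial correlation on the standard errors and on the Type-I error rate can be demonstrated with a small Monte Carlo experiment. The sketch below is illustrative only and rests on arbitrary assumptions: a time-trend regressor, AR(1) errors with phi = 0.8, a true slope of zero, and 2,000 replications.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, phi, reps = 100, 0.8, 2000
x = np.arange(n, dtype=float)  # a time-trend regressor
rejections = 0
naive_ses, slopes = [], []

for _ in range(reps):
    # True slope is zero, but the errors follow a positively autocorrelated AR(1) process.
    e = np.zeros(n)
    noise = rng.standard_normal(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + noise[t]
    y = e  # y = 0 * x + e

    # OLS slope and its conventional (i.i.d.-error) standard error and p-value.
    slope, intercept, rvalue, pvalue, stderr = stats.linregress(x, y)
    slopes.append(slope)
    naive_ses.append(stderr)
    if pvalue < 0.05:  # nominal 5% two-sided test of slope = 0
        rejections += 1

print(f"average conventional OLS standard error: {np.mean(naive_ses):.4f}")
print(f"actual std. dev. of the slope estimates: {np.std(slopes):.4f}")
print(f"empirical Type-I error rate at the 5% level: {rejections / reps:.3f}")

The conventional standard error reported by OLS comes out noticeably smaller than the actual spread of the slope estimates, so the null hypothesis of a zero slope is rejected far more often than the nominal 5% of the time.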
Testing for Serial Correlation
1. Plotting residuals: A scatter plot of the residuals (errors) on the y-axis against time on the x-axis can reveal serial correlation: runs of residuals with the same sign suggest positive serial correlation, while residuals that alternate in sign suggest negative serial correlation. (A code sketch covering this plot and the Durbin-Watson statistic appears after the discussion of the equation below.)

2. Using the Durbin-Watson test: The DW statistic is used to test for serial correlation. It tests the null hypothesis of no auto-correlation against the alternative hypothesis of positive/negative auto-correlation. For large samples, the Durbin-Watson statistic (d) is approximately equal to

d = 2 (1 - r)
where,
r = the sample correlation between the regression residuals (errors) from one period and those from the previous period.

According to the equation:
  • If there is no auto-correlation, then d = 2
  • If the auto-correlation is +1.0, then d = 0
  • If the auto-correlation is -1.0, then d = 4
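The sketch below (not from the original post; it uses simulated data with an arbitrary AR(1) coefficient of 0.7) applies both tests: it plots the residuals against time and computes the Durbin-Watson statistic with statsmodels, checking it against the approximation d ≈ 2(1 - r).

import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(2)
n = 200
x = rng.standard_normal(n)

# Build y with positively autocorrelated AR(1) errors.
e = np.zeros(n)
noise = rng.standard_normal(n)
for t in range(1, n):
    e[t] = 0.7 * e[t - 1] + noise[t]
y = 1.0 + 2.0 * x + e

# Fit OLS and extract the residuals.
fit = sm.OLS(y, sm.add_constant(x)).fit()
resid = fit.resid

# Test 1: plot the residuals (y-axis) against time (x-axis).
plt.plot(resid, marker="o", linestyle="-")
plt.xlabel("time")
plt.ylabel("residual")
plt.title("Residuals over time")
plt.show()

# Test 2: Durbin-Watson statistic and the large-sample approximation d = 2(1 - r).
d = durbin_watson(resid)
r = np.corrcoef(resid[1:], resid[:-1])[0, 1]  # lag-1 correlation of the residuals
print(f"Durbin-Watson d = {d:.2f},  2(1 - r) = {2 * (1 - r):.2f}")

Because the simulated errors are positively autocorrelated, d comes out well below 2, and long runs of same-sign residuals are visible in the plot.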
Decision Rule:
A. For positive auto-correlation, the decision rule is:
Null Hypothesis = no positive auto-correlation
Alternative Hypothesis = positive auto-correlation
  • If d < dl then Reject Null Hypothesis
  • If d > du then Do not reject Null Hypothesis
  • If dl ≤ d ≤ du then Inconclusive
B. For negative auto-correlation, the decision rule is:
Null Hypothesis = no negative auto-correlation
Alternative Hypothesis = negative auto-correlation
  • if d > 4 - dl then Reject Null Hypothesis
  • if d < 4 - du then Do not reject Null Hypothesis
  • if 4 - du ≤ d ≤ 4 - dl then Inconclusive
dl = lower critical value of d (from the Durbin-Watson table for the given sample size and number of independent variables)
du = upper critical value of d (from the same table)
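The two decision rules can be collected into a single helper function. This is a minimal sketch; the function name and the critical values in the example calls are hypothetical, and in practice dl and du must be read from a Durbin-Watson table for the actual sample size and number of independent variables.

def dw_decision(d, dl, du, direction="positive"):
    """Return the Durbin-Watson test conclusion for the chosen alternative."""
    if direction == "positive":
        # H0: no positive auto-correlation vs. H1: positive auto-correlation
        if d < dl:
            return "reject H0 (evidence of positive auto-correlation)"
        if d > du:
            return "do not reject H0"
        return "inconclusive"
    else:
        # H0: no negative auto-correlation vs. H1: negative auto-correlation
        if d > 4 - dl:
            return "reject H0 (evidence of negative auto-correlation)"
        if d < 4 - du:
            return "do not reject H0"
        return "inconclusive"


# Example with hypothetical critical values dl = 1.55 and du = 1.62:
print(dw_decision(d=0.90, dl=1.55, du=1.62, direction="positive"))  # reject H0
print(dw_decision(d=3.20, dl=1.55, du=1.62, direction="negative"))  # reject H0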
