Wald test

The Wald test is a statistical test introduced in 1943 by Abraham Wald. If $\theta$ is an unknown parameter of a population and $\theta_0$ a given value, the test checks the hypotheses

$H_0\colon \theta = \theta_0$ versus $H_1\colon \theta \neq \theta_0.$

The problem is to determine the distribution of a suitable test statistic under the null hypothesis. The Wald test relies on the fact that, for a large number of observations, the maximum likelihood estimator of the unknown parameter converges in distribution to a normal distribution. Many tests can therefore be understood as special cases of the Wald test.


Examples

One-sample Gauss test as a special case of the Wald test

If a variable is normally distributed in a population with unknown mean $\mu$ and known variance $\sigma^2$, then the sample mean

$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$

is also the maximum likelihood estimator for $\mu$. One pair of hypotheses for the one-sample Gauss test is

$H_0\colon \mu = \mu_0$ versus $H_1\colon \mu \neq \mu_0,$

and the test statistic according to Wald would be

$W = \frac{\bar{X} - \mu_0}{\sigma/\sqrt{n}}.$

Thus, the one-sample Gauss test can be regarded as a special case of the Wald test.
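As a minimal illustration, the Gauss test statistic above can be computed directly; the sample, $\mu_0$, and $\sigma$ below are made up for the example:

```python
import math

def gauss_wald_statistic(sample, mu0, sigma):
    """Wald statistic of the one-sample Gauss test: (x̄ − μ0) / (σ/√n)."""
    n = len(sample)
    xbar = sum(sample) / n
    return (xbar - mu0) / (sigma / math.sqrt(n))

# Hypothetical example: 16 observations with mean 10.5,
# testing H0: μ = 10 with known σ = 2
w = gauss_wald_statistic([10.5] * 16, mu0=10.0, sigma=2.0)
print(round(w, 3))  # (10.5 − 10) / (2/4) = 1.0
```

Under $H_0$ this statistic is compared with the quantiles of the standard normal distribution.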

Application in regression analysis

With the Wald test it can be checked, for example, whether a single independent variable or all independent variables jointly make a significant contribution to a generalized linear regression model:

Both the dependent variable and the independent variables can be binary (categorical) or metric. The Wald test can then test the hypotheses:

  • $H_0\colon \beta_j = 0$ vs. $H_1\colon \beta_j \neq 0$, or
  • $H_0\colon \beta_1 = \dots = \beta_k = 0$ vs. $H_1\colon \beta_j \neq 0$ for at least one $j$.

If the Wald test rejects the null hypothesis for one or more independent variables, then we can assume that the corresponding parameters are different from zero, so that the variable(s) should be included in the model. If only a single independent variable is concerned, a t-test is used to check whether its parameter is significant. For a single parameter, the Wald statistic coincides with the square of the t statistic.
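The coincidence of the Wald statistic with the squared t statistic can be checked numerically. The following sketch fits a simple OLS regression with NumPy on simulated data (the coefficients 1.0 and 2.0 are made up) and compares the two statistics for $H_0\colon \beta_1 = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# OLS fit of y = b0 + b1*x via the normal equations
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
sigma2 = resid @ resid / (n - 2)         # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)    # estimated covariance of beta

t_stat = beta[1] / np.sqrt(cov[1, 1])    # t statistic for H0: b1 = 0
wald = beta[1] ** 2 / cov[1, 1]          # Wald statistic for the same hypothesis

print(np.isclose(wald, t_stat ** 2))  # True: Wald equals the squared t statistic
```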

In the classical linear regression model, however, a test such as the F-test is exact even in small (finite) samples and is therefore preferable whenever the distribution of the estimator is known (e.g. in the classical linear regression model or in panel data analysis with fixed effects).

Mathematical Background

Univariate case

From maximum likelihood theory we know that, for a large number of observations, the maximum likelihood estimator $\hat{\theta}$ of the unknown parameter $\theta$ converges in distribution to a normal distribution:

$\hat{\theta} \;\overset{a}{\sim}\; \mathcal{N}\!\left(\theta, \operatorname{se}(\hat{\theta})^2\right).$

In the univariate case, the Wald test statistic for the hypotheses $H_0\colon \theta = \theta_0$ vs. $H_1\colon \theta \neq \theta_0$ is

$W = \frac{\hat{\theta} - \theta_0}{\operatorname{se}(\hat{\theta})},$

where $\operatorname{se}(\hat{\theta})$ is referred to as the standard error of the maximum likelihood estimator. Considering the squared test statistic, one obtains

$W^2 = \left(\frac{\hat{\theta} - \theta_0}{\operatorname{se}(\hat{\theta})}\right)^2 \;\xrightarrow{d}\; \chi^2(1).$

That is, in large samples it is asymptotically chi-square distributed with one degree of freedom.
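This asymptotic behaviour can be illustrated by simulation. The sketch below (sample size and replication count are arbitrary choices) draws repeated samples under $H_0$, computes the squared statistic, and checks two properties of the $\chi^2(1)$ distribution: its mean is 1, and its 5% critical value is about 3.841:

```python
import numpy as np

rng = np.random.default_rng(42)
mu0, sigma, n, reps = 0.0, 1.0, 50, 20000

# Simulate the squared Wald statistic of the Gauss test under H0: μ = μ0
samples = rng.normal(mu0, sigma, size=(reps, n))
w = (samples.mean(axis=1) - mu0) / (sigma / np.sqrt(n))
w2 = w ** 2

# A chi-square(1) variable has mean 1 and P(W² > 3.841) ≈ 0.05
print(round(w2.mean(), 2))
print(round((w2 > 3.841).mean(), 3))
```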

Multivariate case

In the multivariate case,

$\hat{\theta} \;\overset{a}{\sim}\; \mathcal{N}\!\left(\theta, \hat{\Sigma}\right)$

applies, where $\hat{\theta}$ denotes the vector of estimators and $\hat{\Sigma}$ the asymptotic, nonsingular covariance matrix of the maximum likelihood estimator. For a restriction function $c(\theta)$ with $Q$ restrictions, $H_0\colon c(\theta) = 0$, the test statistic

$W = c(\hat{\theta})^{\top} \left[ C(\hat{\theta}) \, \hat{\Sigma} \, C(\hat{\theta})^{\top} \right]^{-1} c(\hat{\theta}), \qquad C(\theta) = \frac{\partial c(\theta)}{\partial \theta^{\top}},$

is asymptotically chi-square distributed with $Q$ degrees of freedom. The restriction function must be continuously differentiable at $\theta$, and its Jacobian $C(\theta)$ must have full rank.
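For the common case of a linear restriction $c(\theta) = R\theta - r$, the statistic reduces to a quadratic form. A minimal NumPy sketch, with a made-up estimate and covariance matrix, is:

```python
import numpy as np

def wald_statistic(theta_hat, cov, R, r):
    """Wald statistic for the linear restriction H0: R θ = r.

    theta_hat : estimated parameter vector
    cov       : estimated covariance matrix of theta_hat
    R, r      : restriction matrix and target vector (Q restrictions)
    """
    diff = R @ theta_hat - r
    # diffᵀ [R Σ Rᵀ]⁻¹ diff, solved without forming the inverse explicitly
    return diff @ np.linalg.solve(R @ cov @ R.T, diff)

# Hypothetical example: jointly test H0: θ1 = 0 and θ2 = 0 (Q = 2)
theta_hat = np.array([0.3, -0.1])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
R = np.eye(2)
r = np.zeros(2)
W = wald_statistic(theta_hat, cov, R, r)
# Compare W with the chi-square quantile for Q = 2 (5.991 at the 5% level)
print(round(W, 3))  # 2.6
```

Since $W = 2.6 < 5.991$, the joint null hypothesis would not be rejected at the 5% level in this made-up example.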

Alternatives

An alternative to the Wald test is the likelihood-ratio test. Although it is computationally more expensive, it shows better properties in small samples. Another alternative is the Lagrange multiplier test (LM test; see Lagrange multiplier). Asymptotically, however, these three tests are equivalent.
