Intelligence quotient

The intelligence quotient (IQ) is a score used to assess general intellectual performance (general intelligence) or performance in particular domains (e.g., factors of intelligence). It is usually determined with an intelligence test whose result is compared with that of other persons in a reference group (norming). Differences arise from the composition of this reference group (the total population, or subgroups of similar age, educational level, profession, etc.). The distribution of the reference group's test results is estimated and rescaled to a normal distribution with mean 100 and standard deviation 15. About two thirds of the people in the reference group therefore have an IQ between 85 and 115; the further a score lies from 100, the fewer people one will find with a similar IQ.

When interpreting an IQ, the measurement method (e.g., the type of intelligence test), the underlying concept of intelligence, and the reference group used for norming must all be taken into account, since they influence the stability and generalizability of the estimate of a person's intelligence.

Because measured scores have risen on average over the years (the "Flynn effect"), intelligence tests must be re-normed again and again.

Calculations

Historically

Alfred Binet, who in 1905 developed the first usable intelligence test, the Binet-Simon test, expressed mental performance as a mental age. The test consisted of tasks of increasing difficulty that were meant to be solvable, as clear-cut as possible, by the corresponding age groups. An average eight-year-old, for example, should be able to solve all the tasks for his age group (and below), but not those for nine-year-olds. A child who could not solve all the tasks of its own age group had a lower "mental age"; one who also managed tasks of an older age group had a higher one.

William Stern related this mental age to the chronological age and thus invented the intelligence quotient in 1912. Lewis M. Terman of Stanford University developed the quotient further, working from Goddard's English translation of Simon and Binet's French test. To remove the decimal places, he multiplied the mental-age/chronological-age quotient by 100.
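Stern's ratio formula, with Terman's factor of 100, can be sketched in a few lines (the function name is illustrative, not from any historical source):

```python
def ratio_iq(mental_age, chronological_age):
    """Stern's historical ratio IQ: mental age divided by
    chronological age, multiplied by 100 as Terman proposed."""
    return 100 * mental_age / chronological_age

# An eight-year-old performing at the level of an average ten-year-old:
print(ratio_iq(10, 8))  # 125.0
# A child performing exactly at its age level scores 100:
print(ratio_iq(8, 8))   # 100.0
```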

Modern

Since mental age grows more slowly than chronological age, the IQ computed with Stern's formula falls steadily over a person's life. Terman recognized this problem during his development work. To address it, he normed the test separately for different age groups, treating the score distribution within each age group as a normal distribution. In the Stanford-Binet test as revised in 1937, the standard deviation varies between 15 and 16 IQ points depending on age (see Valencia and Suzuki, 2000, pp. 5 ff.).

Originally developed only for children, in particular for school-readiness testing, the IQ calculation was later extended to adults by David Wechsler, who applied a population-based scaling with a mean of 100. Today's deviation IQ scale has a mean of 100 and a standard deviation (SD) of 15. It is used, for example, in the Hamburg-Wechsler intelligence test series (cf. Zimbardo, 1995, pp. 529 ff.).

Conversions

Other standard scales can also be used, such as the percentile rank (percentile). Via the reference to the normal distribution, values from other scales can be converted to an IQ scale with mean 100 without loss of information:

IQ = 100 + 15 · (x − m) / s

where

  • x is the scale value achieved in the test,
  • m is the mean of the scale used,
  • s is the standard deviation of the scale used.
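For the percentile rank mentioned above, the conversion goes through the normal distribution itself rather than a simple linear rescaling. A minimal sketch using Python's standard library (function names are ours):

```python
from statistics import NormalDist

# IQ scale: normal distribution with mean 100 and SD 15
iq_dist = NormalDist(mu=100, sigma=15)

def iq_to_percentile(iq):
    """Share of the reference population scoring at or below this IQ."""
    return 100 * iq_dist.cdf(iq)

def percentile_to_iq(p):
    """IQ score corresponding to a given percentile rank."""
    return iq_dist.inv_cdf(p / 100)

# An IQ of 115 (one SD above the mean) is roughly the 84th percentile:
print(round(iq_to_percentile(115), 1))  # 84.1
# The 50th percentile corresponds to the mean IQ:
print(round(percentile_to_iq(50)))      # 100
```

This illustrates why the text speaks of conversion "without loss of information": both directions are exact, one-to-one mappings through the normal distribution.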

Example

The IS-2000 uses the so-called standard value scale (SW), which has a mean of 100 and a standard deviation of 10. If a person achieves a standard value of 110, this can be converted into an IQ score as follows:

IQ = 100 + 15 · (110 − 100) / 10 = 115

Since 110 lies exactly one standard deviation above the mean of the standard value scale, the same must hold for the corresponding IQ score, and the calculation indeed yields 115. Conversely, an IQ score of 85, one standard deviation below the mean, corresponds to a standard value of 90 (see also linear transformation).
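The linear conversion between scales can be sketched as follows (a minimal illustration; the function names are ours, not from any test manual):

```python
def to_iq(x, mean, sd, iq_mean=100, iq_sd=15):
    """Linearly rescale a score from an arbitrary standard scale
    to the deviation IQ scale (mean 100, SD 15)."""
    return iq_mean + iq_sd * (x - mean) / sd

def from_iq(iq, mean, sd, iq_mean=100, iq_sd=15):
    """Inverse transformation: an IQ score back to the original scale."""
    return mean + sd * (iq - iq_mean) / iq_sd

# Standard value scale (SW): mean 100, SD 10
print(to_iq(110, 100, 10))   # 115.0
print(from_iq(85, 100, 10))  # 90.0
```

Because both directions are linear in the score, the transformation preserves how many standard deviations a result lies from the mean, which is exactly the point of the worked example above.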
