First Approach to Internal Consistency

Consider a test of 100 items randomly divided into Form A and Form B.
Consider the test-retest form of reliability: Form A is administered at time t, and Form B is administered shortly thereafter at time t+1.
The correlation between performance on Form A and Form B is a measure of the test's reliability.
Assume now that respondents take both forms, A and B, at the same sitting. The correlation between performance on Form A and Form B is still a measure of the test's reliability.
Assume instead that the 100 items are not divided into Form A and Form B, but are randomly interspersed within the same test and numbered odd and even. The correlation between the odd and even items is still a measure of the test's reliability.
This is called the "split-halves" technique for assessing internal consistency, and it has several forms: performance on the first 50 items could be correlated with performance on the last 50; performance on the first 25 and last 25 combined could be correlated with performance on the middle 50; and so on.
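The odd-even split described above can be sketched in code. This is an illustrative example on synthetic data (the response matrix and its dimensions are invented for the demonstration), scoring each half and correlating the two half-test scores; the Spearman-Brown step at the end, which projects the half-test correlation up to full test length, is standard practice though not mentioned in the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic responses: 200 respondents x 100 items, driven by a
# common "ability" factor so the items intercorrelate (assumed data).
ability = rng.normal(size=(200, 1))
items = ability + rng.normal(scale=2.0, size=(200, 100))

# Split-halves: score the odd- and even-numbered items separately.
odd_scores = items[:, 0::2].sum(axis=1)
even_scores = items[:, 1::2].sum(axis=1)

# Correlation between the two half-test scores.
r_halves = np.corrcoef(odd_scores, even_scores)[0, 1]

# Spearman-Brown correction: estimated reliability of the
# full-length (100-item) test from the half-test correlation.
reliability = 2 * r_halves / (1 + r_halves)
print(round(r_halves, 3), round(reliability, 3))
```

Any other split (first 50 vs. last 50, etc.) only changes the column indices used to form the two half-test scores.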
G.F. Kuder and M.W. Richardson (1937) developed various reliability formulas to apply to such "parallel tests."
Kuder-Richardson Formula 20 was shown by L.J. Cronbach (1951) to produce the mean correlation of all possible ways of splitting a test into two halves. Since then, the formula has been known as Cronbach's alpha.
alpha = k · r / [1 + (k − 1) · r]

Where:
k = number of indicators (items)
r = mean inter-item correlation, computed from one off-diagonal of the item correlation matrix: the sum of the intercorrelations divided by their number.
As the average intercorrelation among the items (r) increases, and as the number of items (k) increases, the value of alpha increases, and so the scale becomes more reliable.
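A minimal sketch of that formula makes the two effects easy to check; the function name and the example values of k and r below are chosen for illustration.

```python
def standardized_alpha(k, r_bar):
    """Standardized Cronbach's alpha for k items with mean
    inter-item correlation r_bar."""
    return (k * r_bar) / (1 + (k - 1) * r_bar)

# Alpha rises both with more items and with higher intercorrelations.
print(standardized_alpha(10, 0.2))  # 10 items, average r = .2
print(standardized_alpha(20, 0.2))  # doubling the items raises alpha
print(standardized_alpha(10, 0.4))  # stronger intercorrelations do too
```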
Programs that calculate scale reliability commonly employ Cronbach's alpha.
