data. A typical example of such alternative data sets consists of data sources 14 and 15 for West Germany, both of which pertain to party support by educational groupings. Source 14 comes from a U.S. Information Agency survey taken in 1960 with a sample size of 1,220; source 15 comes from the 1959 survey of 955 respondents reported by Gabriel Almond and Sidney Verba in The Civic Culture. Although the educational responses were trichotomized into "primary-secondary-university" in both surveys, the results understandably were not identical, and the attraction, concentration, and reflection scores for the German parties differed somewhat between the surveys. We evaluated the magnitude of the discrepancies for this pairing of data sets and for all other possible pairs of measures by computing correlation coefficients for alternative attraction, concentration, and reflection scores produced for the same parties. These correlations are regarded as reliability coefficients for our measurement of the three concepts.
The results of our reliability assessment have been both instructive and encouraging. Because we had collected a considerable amount of "extra" data with an eye toward these reliability checks, we generated a healthy total of 512 comparisons among alternative data sets over all six cultural differentiators for each of our measures of attraction, concentration, and reflection. The product-moment correlations for the measures generated from alternative data sets were .70 for attraction, .69 for concentration, and .90 for reflection (N = 512 for each). The calculation of negative scores for reflection in some instances (see the discussion of the formula) served to increase the variance and inflate the reliability. When these extreme negative scores were set to 0, the correlation dropped to .70, which is a more accurate assessment of measurement reliability for social reflection.
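The reliability procedure described above amounts to computing a product-moment correlation between the scores two alternative data sets assign to the same parties, optionally truncating negative reflection scores at 0 first. The following is a minimal sketch of that computation; the function names and the `truncate_negative` flag are illustrative, not the authors' own.

```python
from math import sqrt

def pearson(xs, ys):
    """Product-moment (Pearson) correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def reliability(a_scores, b_scores, truncate_negative=False):
    """Correlate scores for the same parties from two alternative data sets.

    When truncate_negative is set, negative (reflection) scores are set
    to 0 before correlating, so that extreme negative values do not
    inflate the variance and hence the reliability coefficient.
    """
    if truncate_negative:
        a_scores = [max(0.0, s) for s in a_scores]
        b_scores = [max(0.0, s) for s in b_scores]
    return pearson(a_scores, b_scores)
```

In practice each reliability coefficient would pool such party-by-party score pairs across all comparisons for a given measure, as in the N = 512 figures above.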
On closer examination, it was determined that the reliabilities for all three measures were lower for occupation than for the other cultural differentiators. This was so because we accepted responses to "income" and "social status" items as alternative operationalizations of the "occupation" dimension. Although income and status are clearly distinct variables in their own right, they are widely regarded as surrogate measures of lifestyle and economic interests. Our cross-national study, however, indicates that the interchangeability of these variables is far from perfect and that most of the major differences in scores generated from alternative data sets (i.e., differences of a magnitude of .30 or greater) come from comparisons among occupation, income, and status responses. One typical pattern was for those who identified with communist and socialist parties to style themselves overwhelmingly as members of the "lower class," contradicting the high incomes and professional occupations they reported on the alternative indicators. Another pattern was for farmer parties to lose their basis of distinctiveness on income and status items but to emerge high on concentration and low on attraction when the item pertained specifically to occupation. This finding led us to favor occupation per se for our purposes wherever possible.
Reasoning that our measures of occupation involved comparing different variables and thus did not constitute a proper test of reliability, we recomputed all our correlations on the basis of the remaining five cultural differentiators. Because the alternative measures of occupation accounted for 233 of our total comparisons, the number of remaining comparisons dropped to 279. The comparable correlations rose to .79 for attraction, .82 for concentration, and .78 for reflection. Our measures of the social bases of party support thus seem fairly stable across comparable data sources and therefore quite reliable.
Alternative data sets nevertheless always yielded competing measures for the same party and cultural differentiator. Because we do not have space to publish all the measures we computed and their underlying data sets, we had to make choices for some countries and parties about what portions of our data to publish. We selected for publication those data sets that tended to maximize four criteria: (1) highest data quality, as represented by the associated AC codes, (2) comparability in coding categories for the cultural differentiator involved, (3) lowest mean attraction score over all the parties in the country, and (4) highest mean concentration score. The data sets that best met these criteria were also used to compute the parties' social support scores for the machine-readable quantitative data file that is available through the Inter-University Consortium for Political and Social Research.
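The four selection criteria can be read as a ranking rule over competing data sets. The sketch below treats them as a lexicographic ordering in the sequence listed; that ordering, the field names, and the assumption that higher AC codes mean better quality are all illustrative guesses, not a specification from the source.

```python
from dataclasses import dataclass

@dataclass
class DataSet:
    """Hypothetical record for one candidate data set; fields are illustrative."""
    ac_quality: int          # AC data-quality code (assumed: higher = better)
    comparable_coding: bool  # coding categories comparable for this differentiator
    mean_attraction: float   # mean attraction over all parties in the country
    mean_concentration: float  # mean concentration over all parties

def selection_key(ds):
    """Rank by the four publication criteria, applied lexicographically:
    quality, coding comparability, low mean attraction, high mean concentration."""
    return (ds.ac_quality, ds.comparable_coding,
            -ds.mean_attraction, ds.mean_concentration)

def choose(datasets):
    """Pick the candidate data set that best satisfies the criteria."""
    return max(datasets, key=selection_key)
```

Negating `mean_attraction` in the key lets a single `max` call prefer low attraction while preferring high values on the other criteria.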
Our reasons for basing our selection of data on data quality and comparable coding categories are obvious; it is less clear why we chose also to select data for publication on the basis of low attraction and high concentration scores. In general, we view our efforts to score parties on social attraction and concentration as imperfect attempts to assess the distinctive nature of parties' support patterns. We expect the subtleties in many support patterns to be missed by our crude data and operationalizations of variables. Therefore--given data of comparable quality--if we find that categorizing a variable one way produces less attraction and more concentration among parties within a system than categorizing it another way, we suspect that the first method "gets at" the basis of distinctiveness better than the other. This is so because low attraction and high concentration scores for the parties in a given country indicate that the parties differ in their support patterns.