survey data, we remind them that all the impressionistic data are tagged with an adequacy-confidence code of 3, which should allow easy separation of soft data from hard. Note, however, that if the relevant number of cases for coding a party from sample survey data fell below 15, the data were judged "barely adequate." A variable based on such a small N was also given the lowest AC code, its "hard data" character notwithstanding.
Because of our interest in coding as many parties as possible on attraction, concentration, and reflection, we also had to depart occasionally from our conceptualization of party "support" as the psychological identification with the party by members of a social group. Conceptualizing support in this way focuses attention on the party as an institution while downplaying the interaction of the party with candidates and issues in particular election campaigns. The questions used to tap party support varied from survey to survey, asking which party the respondent "preferred," to which he "felt closest," and so on. If the only survey for a country lacked such an item on party support but did have one on voting behavior (choice) in the last election, we used that item and that survey rather than substituting our impressionistic judgments of "support" in the original sense of our concept.
Another departure from the conceptualization of party support, however, may be more debatable. Questions on party support in the sense of psychological preference were obviously not appropriate for single-party countries, for which survey data were rarely available anyway. When measuring the social bases of ruling parties in single-party systems, our conceptualization changed from that of party preference or support to that of party membership. We divided the population into party members and nonmembers and computed percentages accordingly for the groups within our cultural differentiators.
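The membership-based computation just described amounts to a simple cross-tabulation. The following sketch illustrates it; the group labels, respondent records, and counts are invented for illustration and do not come from the actual data:

```python
# Hypothetical sketch of the membership-based computation described above:
# the population is divided into party members and nonmembers, and the
# percentage of members is computed within each cultural-differentiator group.

def membership_rates(respondents):
    """Percentage of party members within each group."""
    totals, members = {}, {}
    for r in respondents:
        group = r["group"]
        totals[group] = totals.get(group, 0) + 1
        if r["member"]:
            members[group] = members.get(group, 0) + 1
    return {g: 100.0 * members.get(g, 0) / totals[g] for g in totals}

# Invented respondents for an urban-rural differentiator.
sample = [
    {"group": "urban", "member": True},
    {"group": "urban", "member": False},
    {"group": "urban", "member": False},
    {"group": "rural", "member": False},
    {"group": "rural", "member": True},
]
rates = membership_rates(sample)
print(rates)  # percent members per group
```

The same tabulation applies to any cultural differentiator; only the grouping variable changes.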
Except for single-party systems and countries with large proportions of the population uninvolved in politics, which had to be coded impressionistically, we limited our assessment of party support patterns to the politically oriented segment of a society: for example, to those who reported some party preference in response to the survey item rather than refusing to answer or saying "don't know." There were some hazards in this course of action, but excluding the don't knows, undecideds, and no answers seemed preferable to including them for two reasons: (1) the exclusion adjusted for the different response rates across countries, which hampered comparisons of preference percentages, and (2) the exclusion deflated otherwise misleadingly large sample sizes, swelled by many respondents who did not provide useful data. As a result of excluding those who reported no party preference from the computation of percentages of party support, we often reduced the original sample size significantly to an "effective" sample size for our assessment of party support.
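The reduction from raw to effective sample can be sketched as follows; the response codes and party labels are hypothetical stand-ins for whatever a given survey actually used:

```python
# Illustrative sketch of computing party-support percentages on the
# "effective" sample described above. The nonresponse categories and the
# raw responses are invented for this example.

NONRESPONSES = {"don't know", "no answer", "undecided", "refused"}

def support_percentages(responses):
    """Drop nonresponses, then compute support percentages on the rest."""
    effective = [r for r in responses if r not in NONRESPONSES]
    n = len(effective)
    counts = {}
    for party in effective:
        counts[party] = counts.get(party, 0) + 1
    return n, {p: 100.0 * c / n for p, c in counts.items()}

raw = ["A", "B", "don't know", "A", "no answer", "B", "A", "undecided"]
n_effective, pct = support_percentages(raw)
print(n_effective)  # effective N after the exclusions
print(pct)          # percentages sum to 100 over the effective sample
```

Note that the percentages are based on the effective N, not the original sample size, which is what makes the cross-national comparisons less sensitive to differing response rates.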
Our effective sample size for data analysis was often further reduced by respondents who failed to give useful responses to cultural differentiator items. For example, the effective sample was usually reduced considerably for "income" items, as many respondents declined to divulge that information. For occupation, by contrast, the effective sample was substantially reduced when the inquiry was made of the respondent rather than the head of the household and "housewives" were eliminated from the percentages. The loss of respondents from the analysis may vary considerably across cultural differentiators, depending on the country and the nature of the available survey items.
In addition to the problems already raised or implied in assessing party support across nations, other problems are associated with measuring party support within the same nation. Suppose that one has two surveys, of apparently comparable quality, for the same year and the same country. Which should be used? If occupation, income, and social status items are all available in the same survey, which should be used in computing the final party support scores? Would it make any difference in the measurement of attraction, concentration, or reflection if the urban-rural variable were collapsed into four categories rather than computed on the basis of the original six? Should "minor" social groups be included in the assessment? Would it make any difference in scoring the American parties on religious attraction, concentration, and reflection if the scores were based only on Protestants and Catholics, excluding Jews?
Identifying these many decision points in treating the data to be entered into precise formulas for the measurement of attraction, concentration, and reflection raises the issue of measurement reliability. If party scores on these variables are highly sensitive to the particular data sets emerging from alternative decisions, then the whole process is arbitrary and meaningless. Different researchers, working with different surveys or even the same surveys, might produce scores that vary greatly for the same parties and cultural differentiators.
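One way to check such sensitivity is to correlate the party scores produced by alternative data sets: high correlations indicate that the scores are robust to the coding decisions. A minimal sketch, with invented scores for five hypothetical parties measured from two alternative surveys:

```python
# A minimal reliability check: Pearson correlation between party scores
# derived from two alternative data sources. All numbers are invented.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical attraction scores for the same five parties, each measured
# from a different survey.
survey_a = [0.62, 0.48, 0.75, 0.30, 0.55]
survey_b = [0.60, 0.52, 0.70, 0.35, 0.50]
r = pearson_r(survey_a, survey_b)
print(round(r, 3))  # near 1.0 would indicate reliable measurement
```

A correlation near 1.0 across many such pairs would suggest that the scores are not artifacts of any particular data-handling decision.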
We have addressed this issue by scoring our parties using many different data sets and computing correlations between alternative data sources for our measures of attraction, concentration, and reflection. These alternative data sets were derived primarily from sample surveys. In some instances, they were fashioned from the same items asked in different surveys. In others, they came from alternative variables within the same survey or from alternative categorizations of the same variables. Where possible, alternative formulations were constructed from election returns or other hard ecological