neous cross-national comparisons of party politics, it aids in establishing the homogeneity and distinctiveness of the political "eras" under comparison. Figure 2.1 gives the time periods established for all 53 countries studied in the ICPP project. The internal division also functions as an important device for measuring change within parties under the limitations of our research design.

Although the ICPP project covers a basic thirteen-year time span, it does not employ a research design that is longitudinal over time. Despite the length of its time span, the project still gives a cross-sectional picture or "snapshot" of world politics at a given "point" in time. To carry the photographic analogy further, we can speak of "snapshots" with different shutter speeds. The faster the shutter speed, the less the blur of moving objects. For example, the data collection phase of sample survey research--typically described as providing a cross-sectional view or snapshot of public opinion at a given point in time--often takes two or three weeks to complete. This shutter speed allows time for events to change opinion, but the blurring is assumed to be negligible in most instances, and the survey results are treated as showing public opinion at a given time, say November 1968. The shutter speed we used for most of the countries in the ICPP project can be likened to a thirteen-year time exposure. Objects or conditions that do not change during this period should appear sharp in the photograph; the more the movement, the more the blur.

While our basic design is cross-sectional, we do provide some test of party "movement" or change in our coding procedure by scoring parties separately for the first and second "halves" of our time period, set for most of our countries arbitrarily at 1950 to 1956 and 1957 to 1962 but adjusted to fit major political developments within certain countries. Given the nature of library materials, it was felt that only a two-part division in time could be supported with the available information. But at least we have some flexibility in scoring parties that change in external relations or internal organization, and we can produce some knowledge about the rapidity of change for nearly all of the variable clusters and their supporting basic variables. The "institutionalization" variable cluster--which itself involves observations over time--will not be subjected to this two-part coding procedure.
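As an illustrative aside, and not part of the original ICPP materials, the two-part scoring can be pictured in code. The sketch below is a minimal representation assuming a simple record with one code per half-period; the field names, the sample variable, and the change indicator are all hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class VariableScore:
        """One coded variable for one party, scored separately for the
        two halves of the observation period (hypothetical layout)."""
        variable_name: str
        first_half: Optional[int]   # code for ca. 1950-1956
        second_half: Optional[int]  # code for ca. 1957-1962

        def changed(self) -> bool:
            """Crude indicator of party 'movement' between the halves."""
            if self.first_half is None or self.second_half is None:
                return False  # a missing half supports no judgment of change
            return self.first_half != self.second_half

    # A party whose code on some organizational variable shifts between halves
    score = VariableScore("internal_organization", first_half=3, second_half=5)
    print(score.changed())  # True -> evidence of change across the halves

The point of the sketch is only that scoring each half separately yields a rudimentary measure of change; it does not claim that change was computed this way in the project.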
The most important outcome of the ICPP project is the comparative data produced on each of the 158 parties in the original sample. The strategy in assembling these data was to invest heavily in careful research and "quality control" procedures to produce data of the highest possible reliability and validity, given the limitations of working with library materials. Obviously, the literature we indexed and recorded on film varied in its adequacy for providing information with which to make coding judgments, and thus our analysts had more confidence in coding some variables than in coding others. We reflected the adequacy of the documentation underlying any given variable, and our analysts' confidence in their coding judgments, by accompanying each variable with an "adequacy-confidence" rating, as scored by those who did the coding. The adequacy-confidence scale was designed to reflect four factors that seem especially important in determining the researcher's belief in the accuracy or truth value of the coded variable, insofar as that can be determined through library research.

These factors are (1) the number of sources that provide relevant information for the coding decision, (2) the proportion of agreement to disagreement in the information reported by different sources, (3) the degree of discrepancy among the sources when disagreement exists, and (4) the credibility attached to the various sources of information. The first three factors deal primarily with the "adequacy" of the literature that can be cited to document the variable code; the fourth deals more with the analyst's confidence in coding the variable.

In an effort to "objectify" our measure of the researcher's belief in the accuracy or truth value of the coded variable, we operationalized the adequacy-confidence scale primarily in terms of the first three factors: (1) number of sources, (2) proportion of agreement, and (3) degree of discrepancy. This operationalization, however, was intended only to guide the researcher in arriving at his adequacy-confidence rating when the fourth factor, source credibility, was constant across documents. If the credibility factor, ignored in our operationalization, interacted sufficiently with the information sources to make the researcher more or less confident in his coding than the operationalization formula would suggest, then he was free to revise the adequacy-confidence rating accordingly.

The credibility factor was kept out of the operationalization because of the great difficulty of fashioning an acceptable scale for a position in n-dimensional attribute space created from the several subfactors contributing to source credibility, of which three seem especially important: (1) the amount of attention given to the variable in the source, (2) the adequacy of the research underlying the author's observation, and (3) the integrity and objectivity attributed to the author. These factors, and certainly others, can interact in a variety of ways to affect the researcher's evaluation of source credibility, and we did not attempt to spell out rules for handling the combinations and subtleties involved in any such evaluation. Instead, we let source credibility operate as a subjective variable, tempering the researcher's belief in the truth value or accuracy of the variable code after reference to the more objective operationalization. In general, if the "credibility gap" between sources was not great, it was expected that the researcher would score the coding judgment according to the objective operationalization.
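The scoring rules themselves are not reproduced here, but the division of labor in the operationalization can be sketched in code. This is a minimal sketch under stated assumptions: the scale range, the thresholds, and the function name are hypothetical stand-ins, not the actual ICPP formula.

    def adequacy_confidence(n_sources: int,
                            prop_agreement: float,
                            discrepancy: float,
                            credibility_adjustment: int = 0) -> int:
        """Hypothetical sketch: factors (1)-(3) drive an objective base
        rating; factor (4), source credibility, enters only as the
        researcher's subjective adjustment to that base rating."""
        base = min(n_sources, 3) * 2         # (1) more sources -> more adequacy
        base += round(prop_agreement * 2)    # (2) agreement raises the rating
        base -= round(discrepancy * 2)       # (3) discrepancy lowers it
        base = max(0, min(9, base))          # clamp to an assumed 0-9 scale
        # (4) applied only when the researcher judges the "credibility gap"
        # between sources to be great; otherwise it stays at zero
        return max(0, min(9, base + credibility_adjustment))

    # Three agreeing sources, no discrepancy, credibility held constant
    print(adequacy_confidence(3, 1.0, 0.0))      # -> 8
    # Same evidence, but one source judged less trustworthy by the analyst
    print(adequacy_confidence(3, 1.0, 0.0, -2))  # -> 6

What the sketch preserves is only the structure described in the text: a mechanical formula over the first three factors, with credibility left as a manual override rather than a scaled term.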