VI. TESTING HYPOTHESES: TAILS AND ERRORS

OCTOBER 30
HYPOTHESIS TESTS AND
TYPES OF ERRORS

Schmidt, Ch.10: "Introduction to Hypothesis Testing: Tests with a Single Mean," only pp. 272-286.

So far in this statistics course, we haven't even mentioned the technical phrase "statistically significant." The free ride is over, as you learn in this chapter. Suppose you want to determine whether one sample (e.g., students at Northwestern) differs "significantly" from a given population (e.g., all college-level students in the U.S.). Assuming you have data on all college students nationwide, that's a job for the one-sample z test. Be sure you understand these terms: test statistic, alpha, level of significance, critical region, critical value, one-tailed, and two-tailed.
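
If you want to see the arithmetic behind the test, here is a minimal sketch in Python (not part of the reading; the population and sample numbers are invented for illustration):

    # One-sample z test: does a sample mean differ from a known population mean?
    # All numbers below are hypothetical, chosen only for illustration.
    from scipy.stats import norm

    mu, sigma = 13.0, 2.0   # population mean and SD (assumed known)
    xbar, n = 13.5, 100     # sample mean and sample size
    alpha = 0.05            # level of significance

    z = (xbar - mu) / (sigma / n ** 0.5)   # test statistic
    crit = norm.ppf(1 - alpha / 2)         # two-tailed critical value (about 1.96)
    p = 2 * norm.sf(abs(z))                # two-tailed p-value

    print(f"z = {z:.2f}, critical value = +/-{crit:.2f}, p = {p:.4f}")
    # The result is "statistically significant" at alpha = .05 when |z|
    # exceeds the critical value, i.e., falls in the critical region.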

Yesterday's reading assignment stopped just short of introducing Type I and Type II errors. These are hard to keep straight, and even seasoned researchers have to think hard before explaining the difference. An analogy with diagnosing appendicitis may help: doctors have difficulty distinguishing various forms of severe stomach pain from appendicitis.

Type I: A doctor rejects the hypothesis that the patient has appendicitis and fails to operate. The patient did have it and died. (A true hypothesis was rejected.)
Type II: A doctor accepts the hypothesis that the patient has appendicitis and operates. The patient didn't have it, and the operation was unnecessary, painful, and costly. (A false hypothesis was accepted.)

The term alpha denotes the probability of a Type I error, and beta the probability of a Type II error. The researcher is free to set alpha and thus to control the probability of a Type I error. But the situation is more complicated for the probability of a Type II error, which depends on the sample size (among other factors). On pages 321-323, Schmidt explains how sample size figures into the calculation, but you need not know the calculation.
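
If a concrete demonstration helps, the short simulation below (a sketch only, not Schmidt's calculation; the true mean and effect size are invented) shows beta shrinking as the sample grows while alpha stays fixed:

    # How beta (the Type II error rate) depends on sample size, with alpha
    # fixed at .05. The true mean and effect size here are hypothetical.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    mu0, mu_true, sigma, alpha = 100, 103, 15, 0.05
    crit = norm.ppf(1 - alpha / 2)

    for n in (25, 100, 400):
        samples = rng.normal(mu_true, sigma, size=(10_000, n))
        z = (samples.mean(axis=1) - mu0) / (sigma / n ** 0.5)
        beta = np.mean(np.abs(z) <= crit)  # share of tests missing the real difference
        print(f"n = {n:3d}: beta ~ {beta:.2f}, power ~ {1 - beta:.2f}")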


OCTOBER 31

Optional session on hypothesis testing


NOVEMBER 1
TAILS AND
PROPORTIONS

Schmidt, Ch.10: "Introduction to Hypothesis Testing: Tests with a Single Mean," only pp. 286-293.

In the test statistics considered up to now, we've assumed we know the standard deviation of the variable in the population. Because we usually don't, we must estimate it from the sample, which introduces some additional error. So instead of the z-test, we use a procedure called the t-test. Another complication arises when the data are expressed as percentages or proportions (e.g., % of respondents with a college education) rather than as interval scales (e.g., mean years of education). Fortunately, this issue is easy to handle, as Schmidt explains.
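
As a rough illustration of both points (a sketch only; the data and hypothesized values are invented), a t-test estimates the standard deviation from the sample, while a proportion test builds its standard error from the hypothesized proportion:

    # One-sample t test (population SD unknown, so estimated from the sample)
    # and a one-sample z test for a proportion. All numbers are hypothetical.
    import math
    from scipy.stats import t, norm

    # t test: H0 says the population mean is 12.0
    data = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
    n = len(data)
    xbar = sum(data) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))  # sample SD
    t_stat = (xbar - 12.0) / (s / math.sqrt(n))
    p_t = 2 * t.sf(abs(t_stat), df=n - 1)

    # z test for a proportion: H0 says the population proportion is .50
    p_hat, p0, m = 0.58, 0.50, 200     # sample proportion, hypothesized value, n
    se = math.sqrt(p0 * (1 - p0) / m)  # standard error under H0
    z_stat = (p_hat - p0) / se
    p_z = 2 * norm.sf(abs(z_stat))

    print(f"t({n - 1}) = {t_stat:.2f}, p = {p_t:.4f}")
    print(f"z = {z_stat:.2f}, p = {p_z:.4f}")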


NOVEMBER 2
SIGNIFICANCE TESTS FOR CORRELATION COEFFICIENTS

Kirk, "One-Sample t and z tests and confidence interval for a correlation," portion of Chapter 11 in Statistics: An Introduction (Harcourt Brace, 1999), pp 367-369.

Methods of statistical analysis differ across disciplines in the social sciences. Sociology and political science, for example, make more use of correlation than does psychology, which relies more heavily on t-tests (treated this week) and analysis of variance (treated next week). Schmidt, the psychologist who furnished most of our readings, has a chapter on correlational analysis, but he does not deal at all with the significance of a correlation coefficient. Kirk, who is also a psychologist, devotes about three pages to the topic, which fortunately is enough for us.
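
For reference, the test Kirk describes, the standard t test for whether a correlation differs from zero, can be sketched in a few lines (the values of r and n below are hypothetical):

    # Is a sample correlation r significantly different from zero?
    # Uses t = r * sqrt(n - 2) / sqrt(1 - r^2) with n - 2 degrees of freedom.
    import math
    from scipy.stats import t

    r, n = 0.45, 30  # hypothetical correlation and sample size
    t_stat = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
    p = 2 * t.sf(abs(t_stat), df=n - 2)
    print(f"t({n - 2}) = {t_stat:.2f}, p = {p:.4f}")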