# Errors in Hypothesis Testing

We can use criminal proceedings to explain error in hypothesis testing.
Consider the following: which would be worse, convicting an innocent man (a Type I error) or allowing a guilty man to walk free (a Type II error)? Explain both in terms of hypothesis tests and in the context of the criminal justice system.
Fisher’s (1925) text is the primary reason we use the alpha value of .05 as the threshold for determining statistical significance.
Do you think that is an appropriate value? Should it be different? Should it vary based on the context? Explain your position.
The α level (alpha level) establishes a criterion, or "cut-off," for making a decision about the null hypothesis. The alpha level also determines the risk of a Type I error (false positive). Commonly used values are:

- α = .05 (most used)
- α = .01
- α = .001

The critical region consists of outcomes that are very unlikely to occur if the null hypothesis is true. That is, the critical region is defined by sample values that are almost impossible, or at least extremely unlikely, to be obtained. A 95% confidence level corresponds to an alpha value of .05, and a 99% confidence level corresponds to an alpha value of .01. See how they complement each other? As a pair, they combine to encompass the entirety of the distribution.
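The complementary relationship between the alpha level, the confidence level, and the boundaries of the critical region can be checked numerically. A minimal sketch using SciPy, assuming a two-tailed z test (the specific alpha values are the ones listed above):

```python
from scipy.stats import norm

# For a two-tailed test, alpha is split between the two tails of the
# distribution, so each tail contains alpha / 2 of the probability mass.
for alpha in (0.05, 0.01, 0.001):
    confidence = 1 - alpha                # e.g. alpha = .05 -> 95% confidence
    z_critical = norm.ppf(1 - alpha / 2)  # boundary of the critical region
    print(f"alpha = {alpha:<6} confidence = {confidence:.1%}  critical region: |z| > {z_critical:.3f}")
```

Note how alpha = .05 reproduces the familiar critical value of ±1.96: the middle 95% of the distribution is the "fail to reject" region, and the outer 5% is the critical region.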
A Type I error occurs when the sample data appear to show a difference when, in fact, there is none. In this case, the researcher will reject the null hypothesis and falsely conclude that there is a difference in ____ (frequencies, means, proportions, etc.). Type I errors are caused by unusual, unrepresentative samples that fall in the critical region even though there is no real difference.
The hypothesis test is structured so that Type I errors are very unlikely; specifically, the probability of a Type I error is equal to the alpha level (.05, .01, etc.).

Type I errors and the α level:

- Also known as the level of significance
- Also determines the risk of a false-positive finding
- The probability that a result would be produced by chance (sampling error or random error) alone

Commonly used levels of significance (α):

- α = .05 (most used): 5%, or 5 out of every 100 results, would be due to chance
- α = .01: 1%, or 1 out of every 100 results, would be due to chance
- α = .001: 0.1%, or 1 out of every 1000 results, would be due to chance
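The claim that the probability of a Type I error equals the alpha level can be demonstrated by simulation. In the sketch below (the group sizes, population parameters, and seed are arbitrary choices for illustration), both groups are drawn from the same population, so the null hypothesis is true by construction and every "significant" result is a false positive:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)  # arbitrary seed, for reproducibility
alpha = 0.05
n_experiments = 5000

false_positives = 0
for _ in range(n_experiments):
    # Both groups come from the SAME population: no real difference exists.
    group_a = rng.normal(loc=100, scale=15, size=30)
    group_b = rng.normal(loc=100, scale=15, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:          # rejecting a true null = Type I error
        false_positives += 1

type_i_rate = false_positives / n_experiments
print(f"Observed Type I error rate: {type_i_rate:.3f} (expected about {alpha})")
```

Across many simulated experiments, the proportion of false positives settles near the alpha level, which is exactly what "5 out of every 100 results would be due to chance" means.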
Type II Errors

A Type II error occurs when no significant difference is observed between groups, but there is one in actuality. In this case, the researcher will fail to reject the null hypothesis and falsely conclude that the groups are equal in their ____ (frequencies, means, proportions, etc.).
Type II errors are commonly the result of a very small difference: although there is a difference, it is not large enough to show up in the research study, or there aren't enough data to generalize from the sample to the population at large.

Type II errors:

- Also known as beta error (β)
- Defined by the probability of false negatives
- An error made by accepting or retaining a false null hypothesis (H0)

Stated simply, you fail to reject a false null hypothesis (H0) and claim that a significant difference does not exist when, in fact, it does exist.