
# PSYC 3525 Chapter Notes - Chapter 12: Type I And Type Ii Errors, Effect Size, Null Hypothesis

by OC69947

PSYC 2520: Introduction to Experimental Psychology

*Beginning Behavioral Research: A Conceptual Primer* (7th ed., 2012), Rosnow & Rosenthal

## Chapter 12: Understanding p Values and Effect Size Indicators (pp. 219-225)

**Why is it important to focus not just on statistical significance?**

- Null hypothesis significance testing (NHST): the use of statistics and probabilities to evaluate the null hypothesis
- Null hypothesis: there is no difference in the dependent variable between any levels of the independent variable
- Effect size is measured using correlations (r)
  - Significance test = Size of effect × Size of study
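The "Significance test = Size of effect × Size of study" relation can be made concrete with the standard conversion from an effect-size correlation r to a t statistic (a sketch; the particular r and N values below are illustrative, not from the book):

```python
import math

def t_from_r(r: float, n: int) -> float:
    """Convert an effect-size correlation r into a t statistic:
    t = r * sqrt(n - 2) / sqrt(1 - r**2).
    The same r yields a larger t as the study size N grows."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# The identical small effect (r = .20) at three study sizes:
for n in (20, 80, 320):
    print(f"N = {n:3d}  t = {t_from_r(0.20, n):.2f}")
```

Because t grows with √N while r stays fixed, a tiny effect can reach "significance" in a huge study and a large effect can miss it in a small one, which is why significance alone is not informative about effect size.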

**What is the reasoning behind null hypothesis significance testing?**

- Null hypothesis vs. alternative hypothesis

**What is the distinction between a Type I error and a Type II error?**

- The risk (or probability) of making a Type I error is called:
  - Alpha (α)
  - Significance level
  - p value
  - A Type I error implies that the decision maker mistakenly rejected the null hypothesis when it is, in fact, true and should not have been rejected
- The risk of making a Type II error is called beta (β)
  - A Type II error implies that the decision maker mistakenly failed to reject the null hypothesis when it is, in fact, false and should have been rejected
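The meaning of α as a Type I error risk can be checked by simulation: when the null hypothesis is actually true, a two-tailed z test at α = .05 should reject (i.e., commit a Type I error) about 5% of the time. This is a sketch, not the book's example; the sample size, simulation count, and seed are arbitrary choices:

```python
import math
import random

def two_tailed_p(z: float) -> float:
    """Two-tailed p value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
n, sims, alpha = 25, 4000, 0.05
rejections = 0
for _ in range(sims):
    # Draw a sample from N(0, 1): here the null ("no effect") is TRUE.
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(sample) / n) * math.sqrt(n)  # sample mean over its SE (sigma = 1 known)
    if two_tailed_p(z) < alpha:
        rejections += 1  # every rejection here is, by construction, a Type I error

print(f"Type I error rate: {rejections / sims:.3f}  (nominal alpha = {alpha})")
```

The observed rate hovers near .05, which is exactly what setting α = .05 promises: it caps the long-run Type I error rate, while β (the Type II error rate) depends separately on the true effect size and N.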

**What are one-tailed and two-tailed p values?**

- The two-tailed p value is applicable when the alternative hypothesis did not specifically predict in which side (or tail) of the probability distribution the significance would be detected
- The one-tailed option is ignored in most cases
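The arithmetic behind the two options is simple: for a symmetric test statistic, the one-tailed p is half the two-tailed p when the result falls in the predicted tail. A sketch using a normal z statistic (the z = 1.80 value is illustrative, not from the text):

```python
import math

def p_two_tailed(z: float) -> float:
    """p value when no direction was predicted: both tails count."""
    return math.erfc(abs(z) / math.sqrt(2))

def p_one_tailed(z: float) -> float:
    """p value when the effect lands in the predicted direction (z > 0 here)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

z = 1.80
print(f"two-tailed p = {p_two_tailed(z):.4f}")  # about .072: not significant at .05
print(f"one-tailed p = {p_one_tailed(z):.4f}")  # about .036: significant at .05
```

The same data can thus be "significant" one-tailed but not two-tailed, which is why the directional prediction must be made before seeing the results.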

**What is the counternull statistic?**

- Failure to reject the null hypothesis does not automatically imply "no effect"; therefore, statistical significance should not be confused with the presence or absence of an effect, or with the practical importance of an obtained effect
- The counternull statistic can provide insurance against mistakenly equating statistical nonsignificance (e.g., p > 0.05) with a zero-magnitude effect
  - r_counternull = 2r / √(1 + 3r²), where r is the obtained value of the effect size
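The counternull computation can be sketched as follows. Note the formula 2r/√(1 + 3r²) is reconstructed here from the surrounding text (the original equation was lost in extraction), so verify it against Rosnow & Rosenthal before relying on it:

```python
import math

def counternull_r(r: float) -> float:
    """Counternull value of an effect-size correlation r (assumed formula:
    r_counternull = 2r / sqrt(1 + 3r**2); check against the textbook)."""
    return 2 * r / math.sqrt(1 + 3 * r ** 2)

# A small, nonsignificant r = .20 has a counternull of about .38: an effect
# of that size is exactly as consistent with the data as the null (r = 0),
# so "nonsignificant" cannot be read as "zero effect".
print(f"counternull of r = .20: {counternull_r(0.20):.2f}")
```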

**What is the purpose of doing a power analysis?**

- Statistical power, defined as 1 − β, refers to the probability of *not* making a Type II error (i.e., of correctly rejecting a false null hypothesis)
- A power analysis enables us to learn (a) whether there is a reasonable chance of rejecting the null hypothesis and (b) whether we should increase the statistical power by increasing the total N
- Given a particular estimated effect size r and a preferred level of power, we can use Table 12.4 to determine how large the total N must be to allow detection of the effect at p = 0.05, two-tailed
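A power analysis of this kind can be sketched with the Fisher-z normal approximation instead of Table 12.4 (the approximation and the search loop below are a common textbook shortcut, not the book's table; results should land close to, but not exactly on, the tabled values):

```python
import math

def phi(x: float) -> float:
    """Standard-normal cumulative distribution function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def power_for_r(r: float, n: int) -> float:
    """Approximate power of a two-tailed test of H0: rho = 0 at alpha = .05,
    using the Fisher z transform: z_r = atanh(r), SE = 1 / sqrt(n - 3)."""
    z_crit = 1.96  # two-tailed critical value at alpha = .05
    shift = math.atanh(r) * math.sqrt(n - 3)
    return (1 - phi(z_crit - shift)) + phi(-z_crit - shift)

def n_for_power(r: float, target: float = 0.80) -> int:
    """Smallest total N whose approximate power reaches the target."""
    n = 4
    while power_for_r(r, n) < target:
        n += 1
    return n

print(f"N needed for power .80 at r = .30: {n_for_power(0.30)}")
```

This shows the logic of part (b) of the bullet above: for a fixed estimated r, the only lever left to raise power is the total N.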

**What can effect size tell us of practical importance?**

- The advantage of using more than one effect size indicator is that different families of effect sizes give us …

