PSYC 333 Midterm Study Guide
Concordia University, Psychology
Instructor: Alwin Spence (Fall)
Psych 315 Review
Ch8 Confidence intervals, effect size and power
Confidence intervals
• Point estimate: a single number computed from a sample and used as an estimate of the
population parameter. It is rarely exactly accurate, so an interval estimate is preferable
when possible.
• Interval estimate: based on a sample statistic, provides a range of plausible values for
the population parameter; that is, the range of sample statistics to expect if the
experiment were repeated. (confidence interval)
• Confidence interval: a range of values around the sample mean, built by adding and
subtracting the margin of error, within which the population mean plausibly falls.
Confirms the findings of hypothesis testing and adds more detail.
• No overlap between two CIs = statistically significant difference; reject the null hypothesis
• Overlap = not statistically different; fail to reject the null hypothesis
Steps to calculating confidence intervals
1. Draw a picture of the distribution that will include the confidence interval
2. Indicate the bounds of the CI on the drawing
3. Determine the z statistics that fall at each line marking the interval bounds
4. Turn the z statistics back into raw means
5. Check that the CI makes sense
M(lower) = −z(σM) + M(sample)    M(upper) = z(σM) + M(sample)
• Interpretation: we are 95% confident that the true population mean falls inside the CI
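The five steps above can be sketched in Python. The function and the example values (sample mean 100, population SD 15, N = 25) are illustrative assumptions, not from the notes:

```python
from statistics import NormalDist


def z_confidence_interval(m_sample, sigma, n, confidence=0.95):
    """Return (lower, upper) bounds of a z-based CI around a sample mean.

    Follows the steps above: find the z cutoffs for the chosen confidence
    level, then convert them back into raw means around the sample mean.
    """
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # ~1.96 for 95%
    std_error = sigma / n ** 0.5                    # sigma_M
    return (m_sample - z * std_error, m_sample + z * std_error)


# Assumed example: sample mean 100, population SD 15, N = 25
lower, upper = z_confidence_interval(100, 15, 25)   # ~ (94.12, 105.88)
```

As a sanity check (step 5), the sample mean should sit exactly in the middle of the interval.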
Effect Size
• Measures how big the difference between the means is; unaffected by sample size
• Indicates the size of a difference in terms of the overlap between distributions
• Less overlap = bigger effect size
• Allows standardization across studies
• Overlap decreases when the means are farther apart and when variability within the
populations is smaller
• Increasing the sample size decreases the standard error, so the test statistic increases
(but the effect size does not change)
Cohen’s d
• Assesses the difference between means using the standard deviation instead of the
standard error
d = (M − μ) / σ
• Small= 0.2, medium=0.5, large=0.8
• Large effect size has smaller overlap
• If greater than 1, the difference between two means is larger than one standard
deviation
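Cohen's d from the formula above is a one-liner; the numbers in the example (means 105 vs 100, SD 15) are assumed for illustration:

```python
def cohens_d(m_sample, mu, sigma):
    """Cohen's d: distance between means in standard-deviation units."""
    return (m_sample - mu) / sigma


# Assumed values: sample mean 105 vs population mean 100, SD 15
d = cohens_d(105, 100, 15)  # ~0.33, between the small and medium benchmarks
```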
Statistical Power
• The measure of the likelihood to reject the null hypothesis, given that the null is false
• The probability that we will avoid a Type II error
• Factors that affect power:
o Larger sample size increases power
o A larger alpha level increases power (e.g., moving from 0.05 to 0.10); a smaller
alpha (0.01) decreases it
o One-tailed tests have more power than two-tailed tests
o A smaller standard deviation increases power
o A bigger difference between the means increases power
• The p level is the probability of committing a Type I error
• We want high power so there is a higher chance of avoiding a Type II error
Steps to calculate power
1. Determine the information needed to calculate power.
a. Population mean, population standard deviation, sample mean, sample size,
standard error (based on sample size)
2. Determine a critical z value and raw mean, to calculate power
3. Calculate power: the percentage of the distribution of the means for population 1 that
falls above the critical value
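The three steps above can be sketched for a one-tailed z test. The function name and the example values (null mean 100, alternative mean 105, SD 15, N = 36) are illustrative assumptions:

```python
from statistics import NormalDist


def power_one_tailed(mu_null, mu_alt, sigma, n, alpha=0.05):
    """Power for a one-tailed z test, following the steps above:
    1) standard error from sigma and N,
    2) critical z converted to a raw cutoff mean,
    3) proportion of the distribution of means for population 1
       that falls beyond the cutoff.
    """
    se = sigma / n ** 0.5
    z_crit = NormalDist().inv_cdf(1 - alpha)   # ~1.645 for alpha = .05
    raw_cutoff = mu_null + z_crit * se         # critical raw mean
    # power: area of the alternative distribution of means past the cutoff
    return 1 - NormalDist(mu_alt, se).cdf(raw_cutoff)


# Assumed example: null mean 100, alternative mean 105, SD 15, N = 36
p = power_one_tailed(100, 105, 15, 36)  # roughly 0.64
```

Increasing N in this sketch raises the power, matching the factor list above.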
Meta-Analysis
• Meta-analysis considers many studies simultaneously.
• Allows us to think of each individual study as just one data point in a larger study.
• Goal of hypothesis testing is to reject null hypothesis that the mean effect size is 0
Steps
1. Select the topic of interest
2. Locate every study that has been conducted and meets the criteria
3. Calculate an effect size, often Cohen’s d, for every study
4. Calculate statistics and create appropriate graphs
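Step 4 can be sketched with stdlib statistics; the effect sizes in `study_ds` are hypothetical placeholders, not results from any real studies:

```python
from statistics import mean, stdev

# Hypothetical Cohen's d values collected from the located studies (step 3)
study_ds = [0.42, 0.31, 0.58, 0.12, 0.47]

mean_d = mean(study_ds)   # summary effect size across studies
sd_d = stdev(study_ds)    # spread of effect sizes between studies
```

The hypothesis test would then ask whether `mean_d` differs significantly from 0.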
Ch9 single-sample t test and the paired-samples t test
T Distribution
• Distribution of means when the parameters are unknown
• Estimate a population standard deviation from a sample
• Compare two samples to each other
Estimated standard deviation:
s = √( Σ(X − M)² / (N − 1) )
standard error
sM = s / √N
t statistic
t = (M − μM) / sM
• As sample size increases, s approaches the population standard deviation, and t and z
become more similar
• Distributions of differences between means
• Degrees of freedom: number of scores that are free to vary when estimating a
population parameter from a sample
Single sample t test
• When we know the population mean, but not the standard deviation
• Degrees of freedom (N-1)
• Assumptions:
o Dependent variable is scale
o If we do not know whether the data were randomly selected, be cautious when
generalizing
o If we do not know whether the population is normally distributed and there are
not at least 30 participants, be cautious
Steps
1. Identify population, distributions, assumptions
2. State the hypotheses
3. Characteristics of the comparison distribution
4. Identify critical values
a. df =N-1
5. Calculate
6. Decide
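The calculation step (step 5) can be sketched as follows; the function and the six sample scores are assumed for illustration:

```python
from statistics import mean, stdev


def single_sample_t(sample, mu):
    """t statistic for a single-sample t test: estimate s from the sample,
    convert it to a standard error, then compare the sample mean to mu."""
    m = mean(sample)
    s = stdev(sample)               # corrected estimate, divides by N - 1
    s_m = s / len(sample) ** 0.5    # estimated standard error
    return (m - mu) / s_m


# Assumed data: 6 scores compared against a known population mean of 10
t = single_sample_t([12, 9, 14, 11, 13, 10], 10)
# df = N - 1 = 5; compare |t| with the critical t for df = 5 to decide
```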
• Confidence interval for a single-sample t test
o Take the critical t values and convert both ends into raw scores
M(lower) = −t(sM) + M(sample)    M(upper) = t(sM) + M(sample)
• Effect size
o Use the sample standard deviation instead of the standard error
d = (M − μ) / s
Paired Sample t Test
• Used to compare two means for a within-groups design, a situation in which every
participant is in both samples.
• The difference from the single-sample t test is that we must create a difference score
for every participant from their scores in each condition
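A paired-samples t test reduces to a single-sample t test on the difference scores against μ = 0. The function and the five before/after score pairs are assumed for illustration:

```python
from statistics import mean, stdev


def paired_samples_t(before, after):
    """Paired-samples t test: build one difference score per participant,
    then run a single-sample t test on the differences against mu = 0."""
    diffs = [a - b for b, a in zip(before, after)]
    s_m = stdev(diffs) / len(diffs) ** 0.5   # standard error of differences
    return (mean(diffs) - 0) / s_m


# Assumed within-groups data: the same 5 participants measured twice
t = paired_samples_t([10, 12, 9, 14, 11], [13, 14, 10, 17, 12])
# df = number of difference scores - 1 = 4
```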
Ch10 Independent-Samples t test
Independent-samples t test
• Used to compare two means in a between-groups design
• Requires that each participant be in only one condition
• The standard error gets closer to zero as sample size gets bigger
• Null hypothesis: the mean difference is no different from the mean difference in the
population (zero)
• Error calculation: the variance of each sample is used to calculate a pooled variance
Creating Distribution of Differences Between Means:
Steps
1. We randomly select three cards, replacing each after selecting it, and calculate the
mean of the heights listed on them. This is the first group.
2. We randomly select three other cards, replacing each after selecting it, and calculate
their mean. This is the second group
3. We subtract the second mean from the first.
4. Steps are repeated many more times
Steps for Independent Samples t test
1. Identify the populations, distribution, and assumptions.
2. State the null and research hypotheses.
3. Determine the characteristics of the comparison distribution.
4. Determine critical values, or cutoffs.
5. Calculate the test statistic.
6. Make a decision.
Test                       | Samples                                    | Comparison distribution
z test                     | One sample group                           | Distribution of means
Single-sample t test       | One sample group                           | Distribution of means
Paired-samples t test      | Two sample groups (same participants)      | Distribution of mean difference scores
Independent-samples t test | Two sample groups (different participants) | Distribution of differences between means
Calculating appropriate Measure of Spread
• Calculate the corrected variance for each sample. (Notice that we’re working with
variance, not standard deviation.)
• Pool the variances. Pooling the variances involves taking an average of the two sample
variances while accounting for any differences in the sizes of the two samples. Pooled
variance is an estimate of the common population variance.
• Convert the pooled variance from squared standard deviation (that is, variance) to
squared standard error (another version of variance) by dividing the pooled variance by
the sample size, first for one sample and then again for the second sample. These are
the estimated variances for each sample’s distribution of means.
• Add the two variances (squared standard errors), one for each distribution of sample
means, to calculate the estimated variance of the distribution of differences between
means.
• Calculate the square root of this form of variance (squared standard error) to get the
estimated standard error of the distribution of differences between means
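The five bullets above can be sketched directly; the function and the two groups of scores are assumed for illustration:

```python
from statistics import mean, variance


def independent_samples_t(x, y):
    """Independent-samples t, following the steps above: corrected variances,
    pooled variance, per-sample squared standard errors, then their sum."""
    n_x, n_y = len(x), len(y)
    df_x, df_y = n_x - 1, n_y - 1
    s2_x, s2_y = variance(x), variance(y)   # corrected (N - 1) variances
    # pool: weighted average of the two variances by degrees of freedom
    s2_pooled = (df_x / (df_x + df_y)) * s2_x + (df_y / (df_x + df_y)) * s2_y
    # convert to squared standard errors and add them
    s2_diff = s2_pooled / n_x + s2_pooled / n_y
    # square root gives the standard error of the differences between means
    return (mean(x) - mean(y)) / s2_diff ** 0.5


# Assumed data for two independent groups
t = independent_samples_t([5, 7, 6, 8, 9], [3, 4, 6, 5, 2])
```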
Formulae
s²X = Σ(X − MX)² / (NX − 1)    s²Y = Σ(Y − MY)² / (NY − 1)
s²pooled = (dfX / dftotal)·s²X + (dfY / dftotal)·s²Y
s²MX = s²pooled / NX    s²MY = s²pooled / NY
s²difference = s²MX + s²MY
sdifference = √(s²difference)
t = ((MX − MY) − (μX − μY)) / sdifference
Effect Size
d = ((MX − MY) − (μX − μY)) / spooled
(where spooled = √(s²pooled))
Ch 13 Correlation
Correlation
• Relationship between variables
• Does not imply causation
• Usually scale variables
Correlation Coefficient
• Falls between -1.00 and 1.00
• Statistic that quantifies a relation between two variables
• The strength of the relation is indicated by the magnitude of the coefficient, not by its
sign (the sign only gives the direction, − or +)
• Positive correlation: one variable goes up, the other variable goes up or one goes down
and the other goes down as well
• Negative correlation: one variable goes up, the other decreases or vice versa
Small 0.10
Medium 0.30
Large 0.50
Limitation
• Third variable problem
• Does not imply causation
Pearson Correlation Coefficient
• Can be used as descriptive statistic
o Describes the direction and strength of an association between two variables
• Can also be used as an inferential statistic
o Relies on a hypothesis test to determine whether the correlation coefficient is
significantly different from 0 (no correlation)
Formula
r = Σ[(X − MX)(Y − MY)] / √(SSX · SSY)
Sum of squares: SSX = Σ(X − MX)², SSY = Σ(Y − MY)²
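The Pearson formula can be sketched as follows; the absences/grades data are invented to match the running example in the hypothesis-test section below:

```python
from statistics import mean


def pearson_r(x, y):
    """Pearson correlation coefficient: sum of cross-products of
    deviations over the square root of SS_X times SS_Y."""
    m_x, m_y = mean(x), mean(y)
    sp = sum((xi - m_x) * (yi - m_y) for xi, yi in zip(x, y))
    ss_x = sum((xi - m_x) ** 2 for xi in x)
    ss_y = sum((yi - m_y) ** 2 for yi in y)
    return sp / (ss_x * ss_y) ** 0.5


# Assumed data: number of absences vs exam grade (negative relation expected)
r = pearson_r([0, 1, 2, 3, 4], [90, 85, 80, 70, 60])  # strongly negative
```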
Hypothesis testing
Null hypothesis
-There is no correlation between number of absences and exam grade
-H0: ρ = 0.
Research hypothesis
-There is a correlation between number of absences and exam grade
-H1: ρ ≠ 0.
Degrees of Freedom is always N-2
• Psychometrics: the branch of statistics used in the development of tests and measures
• Psychometricians: use correlation to examine two important aspects of the development
of measures: reliability and validity
• Reliability: a reliable measure is one that is consistent
o Coefficient alpha: an estimate of a test or measure’s reliability; sometimes
called Cronbach’s alpha
• Validity: a valid measure is one that measures what it was designed or intended to
measure. Correlation is used to calculate validity, often by correlating a new measure
with established measures.
