Chapter 5 notes
Department: Psychology
Course Code: PSYB01H3
Professor: Connie Boudens

Chapter 5 – Measurement Concepts
Reliability of Measures
-Reliability: consistency or stability of a measure of behaviour
-A reliable measure does not fluctuate from one reading to the next. If the measure
does fluctuate, there is error in the measurement device
-Any measure you make comprises two components: (1) a true score, which is the
real score on the variable, and (2) measurement error (observed score = true
score + measurement error)
-When doing research, you can measure each person only once; you can’t give the
measure 50 or 100 times to discover the true score
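A minimal sketch of the true-score-plus-error idea (hypothetical numbers, using Python/NumPy): if we could measure one person many times, the average of the observed scores would land near the true score, even though any single observation may be off by the measurement error.

```python
import numpy as np

rng = np.random.default_rng(0)

true_score = 100.0   # the real score on the variable (hypothetical)
error_sd = 5.0       # spread of the random measurement error (hypothetical)

# Each observed score = true score + random measurement error
observed = true_score + rng.normal(0.0, error_sd, size=100)

print(f"single observation:       {observed[0]:.1f}")    # may be noticeably off
print(f"mean of 100 observations: {observed.mean():.1f}")  # close to 100
```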
-Studying behaviour using unreliable measures is a waste of time because the
results will be unstable and unreplicable
-Reliability is most likely achieved when researchers use careful measurement
procedures
-We can’t directly observe the true score and error components of an actual
score on the measure, but we can assess the stability of measures using
correlation coefficients
-A correlation coefficient is a number that tells us how strongly two variables are
related to each other
-The most common correlation coefficient when discussing reliability is the
Pearson product-moment correlation coefficient. The Pearson correlation
coefficient (symbolized as r) ranges from -1.00 to +1.00. A correlation of
0.00 tells us that the two variables are not related at all; the closer a
correlation is to +1.00 or -1.00, the stronger the relationship
-When the correlation coefficient is positive, there is a positive linear
relationship – high scores on one variable are associated with high scores on
the second variable
-A negative linear relationship is indicated by a minus sign – high scores on one
variable are associated with low scores on the second variable
-To assess the reliability of a measure, we need to obtain at least two scores
on the measure from many individuals. If the measure is reliable, the two
scores should be very similar, and the Pearson correlation coefficient
relating them should be high and positive (see the sketch below)
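A minimal sketch of this procedure (hypothetical scores): obtain two scores per person, then compute the Pearson r between the two sets of scores.

```python
import numpy as np

# Hypothetical scores for eight people, each measured twice on the same scale
score_1 = np.array([12, 18, 9, 15, 20, 7, 14, 16])
score_2 = np.array([11, 19, 10, 14, 21, 8, 13, 17])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal
# entry is the Pearson r between the two sets of scores
r = np.corrcoef(score_1, score_2)[0, 1]
print(f"Pearson r = {r:.2f}")  # a high positive r suggests a reliable measure
```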
Test-Retest Reliability
-Test-retest reliability: assessed by measuring the same individuals at two
points in time
-We would have two scores for each person, and a correlation coefficient could
be calculated to determine the relationship between the first test score and
the retest score
-For most measures, the reliability coefficient should probably be at least .80
-Given that test-retest reliability involves administering the same test twice,
the correlation might be high because individuals remember how they responded
the first time. Alternate forms reliability is sometimes used to avoid this
problem: it involves administering two different forms of the same test to the
same individuals at two points in time
-Intelligence is a variable that can be expected to stay relatively constant over
time; thus, we expect the test-retest reliability for intelligence to be very high
Internal Consistency Reliability
-It is possible to assess reliability by measuring individuals at only one
point in time, because most psychological measures are made up of a number of
different questions, called items
-Internal consistency reliability: the assessment of reliability using
responses at only one point in time
-Split-half reliability: the correlation of an individual’s total score on one
half of the test with the total score on the other half. The two halves are
created by randomly dividing the items into two parts. Note that the full
measure has more items than either half, and so will be more reliable than
either half by itself
-Split-half reliability is relatively straightforward and easy to calculate, even
without a computer
-One drawback is that it does not take into account each individual item’s role in a
measure’s reliability
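A minimal sketch of the split-half calculation (hypothetical item responses; note that with purely random data like this the correlation will be near zero, whereas real items from a coherent scale should correlate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical responses: 20 people x 10 items (e.g., ratings from 1 to 5)
items = rng.integers(1, 6, size=(20, 10))

# Randomly divide the 10 items into two halves
order = rng.permutation(items.shape[1])
half_a = items[:, order[:5]].sum(axis=1)  # total score on one half
half_b = items[:, order[5:]].sum(axis=1)  # total score on the other half

# Split-half reliability: the correlation between the two half-test totals
r_half = np.corrcoef(half_a, half_b)[0, 1]
print(f"split-half r = {r_half:.2f}")
```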
-Another internal consistency indicator of reliability, called Cronbach’s
alpha, is based on the individual items. Here the researcher calculates the
correlation of each item with every other item (done with a computer because
this involves many correlations). The value of alpha is based on the average
of all these inter-item correlations and the number of items in the measure
-It is possible to examine the correlation of each item score with the total score
based on all items. Such item-total correlations and Cronbach’s alpha are very
informative because they provide information about each individual item
-Items that do not correlate with the other items can be eliminated from the
measure to increase reliability
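A minimal sketch of these two calculations (hypothetical data). This uses the standardized form of Cronbach’s alpha, alpha = k*r_bar / (1 + (k-1)*r_bar), which is computed from the average inter-item correlation r_bar and the number of items k, along with corrected item-total correlations:

```python
import numpy as np

# Hypothetical data: rows are respondents, columns are the k items of a scale
items = np.array([
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 4],
    [1, 2, 1, 2],
])
k = items.shape[1]

# Average inter-item correlation: mean of the above-diagonal entries
# of the item-by-item correlation matrix
corr = np.corrcoef(items, rowvar=False)
r_bar = corr[np.triu_indices(k, 1)].mean()

# Standardized Cronbach's alpha from r_bar and the number of items
alpha = (k * r_bar) / (1 + (k - 1) * r_bar)
print(f"alpha = {alpha:.2f}")

# Corrected item-total correlation: each item vs. the sum of the other items;
# items with low values are candidates for elimination
total = items.sum(axis=1)
for i in range(k):
    rest = total - items[:, i]
    r_it = np.corrcoef(items[:, i], rest)[0, 1]
    print(f"item {i}: item-total r = {r_it:.2f}")
```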
Interrater Reliability
-In some research, raters observe behaviours and make ratings or judgements
-You could have one rater make judgements about aggression, but the
observations of a single rater might be unreliable. The solution is to use at
least two raters who observe the same behaviour
-Interrater reliability: the extent to which raters agree in their observations
-A commonly used indicator of interrater reliability is Cohen’s kappa
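A minimal sketch of Cohen’s kappa for two raters making categorical judgements (hypothetical codes): kappa corrects the observed agreement for the agreement expected by chance.

```python
import numpy as np

# Hypothetical judgements by two raters of the same 10 behaviours
# (1 = aggressive, 0 = not aggressive)
rater_1 = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
rater_2 = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1])

# Observed agreement: proportion of behaviours the raters coded identically
p_o = np.mean(rater_1 == rater_2)

# Chance agreement: from each rater's marginal proportions per category
p_e = sum(
    np.mean(rater_1 == c) * np.mean(rater_2 == c)
    for c in np.unique(np.concatenate([rater_1, rater_2]))
)

# Cohen's kappa corrects observed agreement for chance agreement
kappa = (p_o - p_e) / (1 - p_e)
print(f"kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0.0 = chance level
```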
Reliability and Accuracy of Measures
-Reliability tells us about measurement error, but it doesn’t tell us whether
we have a good measure of the variable of interest
Construct Validity of Measures
-If something is valid, it is true in the sense that it is supported by
available evidence
-Construct validity: the adequacy of the operational definition of variables