University of Guelph
Psychology
PSYC 2360
Carol Anne Hendry
Summer

Chapter 5 – Measurement Concepts
We learn about behaviour through careful measurement. The most common measurement strategy is to ask people to tell you
about themselves.
Reliability of Measures
• Reliability refers to the consistency or stability of a measure of behaviour.
• Any measurement that you make can be thought of as comprising two components:
• 1) a true score, which is the real score on the variable
• 2) measurement error
• An unreliable measure of intelligence contains considerable measurement error and so does not provide an accurate
indication of an individual's true intelligence
• When conducting research, you can only measure each person once. Thus, it is very important that you use a reliable
measure.
• In many areas, reliability can be increased by making multiple measures. Reliability increases when the number of
items (questions) increases
• There are several ways of calculating correlation coefficients; the most common correlation coefficient when
discussing reliability is the Pearson product-moment correlation coefficient
• The Pearson correlation coefficient (symbolized as r) can range from 0.00 to +1.00 and from 0.00 to −1.00.
• A correlation of 0.00 tells us that the two variables are not related at all
• The closer a correlation is to 1.00, the stronger is the relationship
• The positive and negative signs provide information about the direction of the relationship
• When the coefficient is positive, high scores on one variable are associated with high scores on the second
variable.
• When the coefficient is negative, high scores on one variable are associated with low scores on the second
variable.
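These properties of r can be illustrated with a short sketch. The helper function and the score lists below are illustrative examples of my own, not data from the text:

```python
# A minimal sketch of the Pearson product-moment correlation (r).
# The score lists are made-up illustration data.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation: covariance divided by the product of spreads."""
    mean_x, mean_y = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# High scores paired with high scores: positive relationship, r = +1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0
# High scores paired with low scores: negative relationship, r = -1.0
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0
```

The sign gives the direction of the relationship; the absolute value (distance from 0.00) gives its strength.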
Test-Retest Reliability
• Assessed by measuring the same individuals at two points in time
• ex. measuring a group of people on one day and again a week later
• If people's scores at the two points in time are highly correlated, we conclude that the measure reflects true scores
rather than measurement error
• For most measures the reliability coefficient should probably be at least .80
• The correlation might be artificially high because the individuals remember how they responded the first time
• Alternate forms reliability involves administering two different forms of the same test to the same individuals at two
points in time.
• Intelligence is a variable that can be expected to stay relatively constant over time; mood, however, is expected to
change from one test period to the next.
• Obtaining two measures from the same people at two points may sometimes be difficult. Researchers have devised
methods to assess reliability without two separate assessments
Internal Consistency Reliability
• We can do this because most psychological measures are made up of a number of different questions, called items.
• A person's test score would be based on the total of his or her responses on all items
• A score is obtained by finding the total number of such items that are endorsed. Recall that reliability increases with
increasing numbers of items
• Internal consistency reliability is the assessment of reliability using responses at only one point in time. Because all
items measure the same variable, they should yield similar or consistent results
• One indicator of internal consistency is split-half reliability; this is the correlation of an individual's total score on
one half of the test with the total score on the other half. The two halves are created by randomly dividing the items
into two parts
• Easy to calculate, even without a computer
• However, does not take into account each individual item's role in a measure's reliability
• Cronbach's Alpha is based on the individual items. Here the researcher calculates the correlation of each item with
every other item. The value of alpha is based on the average of all the interitem correlation coefficients and the
number of items in the measure.
• Item-total correlations and Cronbach's alpha are very informative because they provide information about each
individual item
• Items that do not correlate with the other items can be eliminated to increase reliability
Interrater Reliability
• Raters observe behaviours and make ratings or judgements
• A rater uses instructions for making judgements about the behaviours
• To be reliable, at least two raters must observe the same behaviour
• Interrater Reliability is the extent to which raters agree in their observations
• A commonly used indicator of interrater reliability is called Cohen's Kappa
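Cohen's Kappa compares the agreement two raters actually show against the agreement they would reach by chance alone. A hypothetical sketch with invented behaviour codes:

```python
# Cohen's kappa: observed agreement corrected for chance agreement.
# Rater codes below are made-up illustration data.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    # proportion of observations on which the raters actually agree
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # agreement expected by chance, from each rater's category frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in c1) / n ** 2
    return (observed - expected) / (1 - expected)

# Two raters coding the same 10 observed behaviours
r1 = ["aggressive", "neutral", "aggressive", "neutral", "neutral",
      "aggressive", "neutral", "neutral", "aggressive", "neutral"]
r2 = ["aggressive", "neutral", "aggressive", "neutral", "aggressive",
      "aggressive", "neutral", "neutral", "aggressive", "neutral"]

print(cohens_kappa(r1, r2))   # 0.8
```

Here the raters agree on 9 of 10 behaviours (90%), but because 50% agreement would be expected by chance, kappa is 0.8 rather than 0.9.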
Reliability and Accuracy of Measures
• Reliability tells us about measurement error but it does not tell us about whether we have a good measure of the
variable of interest
• Ex. If you use a shoe size measure and measure your foot in order to see how intelligent you are, it will be reliable as
it won't change from week to week but it is not an accurate measure of intelligence
Construct Validity of Measures
• If something is valid, it is “true” in the sense that it is supported by available evidence
• Construct validity refers to the adequacy of the operational definition of variables
• In terms of measurement, construct validity is a question of whether the
