Chapter 5 – Measurement concepts
Reliability of measures
-Reliability: consistency or stability of a measure of behaviour
-A reliable measure does not fluctuate from one reading to the next. If the measure
does fluctuate, there is error in the measurement device
-Any measure you make comprises two components: (1) a true score, which is the real score of the variable, and (2) measurement error
-When doing research, you can measure each person only once; you can’t give the
measure 50 or 100 times to discover the true score
-studying behaviour using unreliable measures is a waste of time because the
results will be unstable and unreplicable.
-Reliability is most likely achieved when researchers use careful measurement
procedures
-We can’t directly observe the true score and error of an actual score on the
measure. But we can assess the stability of measures using correlation coefficients
-A correlation coefficient is a number that tells us how strongly two variables are
related to each other
-The most common correlation coefficient when discussing reliability is the Pearson product-moment correlation coefficient. The Pearson correlation coefficient (symbolized as r) can range from -1.00 to +1.00. A correlation of 0.00 tells us that the two variables are not related at all; the closer a correlation is to 1.00 (either +1.00 or -1.00), the stronger the relationship
-When the correlation coefficient is positive, there is a positive linear relationship: high scores on one variable are associated with high scores on the second variable
-A negative linear relationship is indicated by a minus sign: high scores on one variable are associated with low scores on the second variable
-To assess the reliability of a measure, we need to obtain at least two scores on the measure from many individuals. If the measure is reliable, the two scores should be very similar; a Pearson correlation coefficient relating the two scores should be a high positive correlation (a short code sketch of this check follows below)
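
A minimal sketch of that check in Python, using only the standard library; the scores and names are invented for demonstration, and pearson_r simply implements the standard product-moment formula.

import math

def pearson_r(x, y):
    """Pearson product-moment correlation (r) between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Numerator: sum of cross-products of deviations from each mean.
    cross = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    # Denominator: square roots of the sums of squared deviations.
    ss_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    ss_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cross / (ss_x * ss_y)

# Hypothetical data: two scores on the same measure for five individuals.
first_score = [12, 15, 9, 20, 17]
second_score = [13, 14, 10, 19, 18]
print(round(pearson_r(first_score, second_score), 2))  # about 0.97: high and positive
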
Test-Retest Reliability
-Test-retest reliability: assessed by measuring the same individuals at two points in time
-we would have two scores for each person, and a correlation coefficient could be
calculated to determine the relationship between the first test score and the retest
score
-For most measures the reliability coefficient should probably be at least .80
-Given that test-retest reliability involves administering the same test twice, the
correlation might be high because the individual remembers how they responded
the first time. Alternate forms reliability is sometimes used to avoid this problem.
Alternate forms reliability involves administering two different forms of the same
test to the same individual at two points in time
-Intelligence is a variable that can be expected to stay relatively constant over time; thus, we expect the test-retest reliability for intelligence to be very high (a short test-retest check is sketched below)
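
A quick sketch of how a test-retest (or alternate forms) check might look, assuming Python 3.10+ where the standard library's statistics.correlation returns Pearson's r; the scores are invented, and the .80 cut-off follows the rule of thumb in the notes.

from statistics import correlation  # Pearson's r (Python 3.10+)

# Hypothetical scores for the same six individuals tested at two points in time
# (for alternate forms reliability, these would be scores on form A and form B).
test = [98, 105, 112, 90, 120, 101]
retest = [100, 104, 110, 93, 118, 99]

r = correlation(test, retest)
print(f"test-retest r = {r:.2f}")
print("meets the .80 rule of thumb" if r >= 0.80 else "below the .80 rule of thumb")
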
Internal Consistency Reliability
-It is possible to assess reliability by measuring individuals at only one point in
time because most psychological measures are made up of a number of different
questions, called items
-Internal consistency reliability: the assessment of reliability using responses at only one point in time
-Split-half reliability: the correlation of an individual's total score on one half of the test with the total score on the other half. The two halves are created by randomly dividing the items into two parts. Because the full measure has more items than either half, it is more reliable than either half by itself, so the split-half correlation is usually adjusted upward (e.g., with the Spearman-Brown formula) to estimate the reliability of the whole test
-Split-half reliability is relatively straightforward and easy to calculate, even
without a computer
-One drawback is that it does not take into account each individual item’s role in a
measure’s reliability
-Another internal consistency indicator of reliability, called Cronbach's alpha, is based on the individual items. Here the researcher calculates the correlation of each item with every other item (done with a computer because there are many correlations). The value of alpha is based on the average of all these inter-item correlations and the number of items in the measure (both split-half reliability and alpha are sketched in code after this section)
-It is possible to examine the correlation of each item score with the total score
based on all items. Such item-total correlations and Cronbach’s alpha are very
informative because they provide information about each individual item
-Items that do not correlate with the other items can be eliminated from the
measure to increase reliability
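
A minimal sketch of both internal consistency estimates, assuming Python 3.10+ and a small made-up data set in which each row is one respondent's answers to six items. The split-half estimate is stepped up with the Spearman-Brown formula to reflect the full-length test, and alpha is computed in its standardized form from the average inter-item correlation, matching the description above; in practice these would be computed with a statistics package.

from itertools import combinations
from statistics import correlation  # Pearson's r (Python 3.10+)

# Hypothetical item responses: rows = respondents, columns = items.
data = [
    [4, 5, 4, 3, 5, 4],
    [2, 2, 3, 2, 1, 2],
    [5, 4, 5, 4, 4, 5],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 2, 1],
]
items = list(zip(*data))  # one column (tuple of scores) per item
k = len(items)

# Split-half reliability: total on one half of the items vs. total on the other.
# (The notes describe a random split; a fixed front/back split keeps this deterministic.)
half_1 = [sum(row[: k // 2]) for row in data]
half_2 = [sum(row[k // 2 :]) for row in data]
r_halves = correlation(half_1, half_2)
split_half = 2 * r_halves / (1 + r_halves)  # Spearman-Brown step-up to full length

# Standardized Cronbach's alpha from the average inter-item correlation.
inter_item = [correlation(items[i], items[j]) for i, j in combinations(range(k), 2)]
mean_r = sum(inter_item) / len(inter_item)
alpha = k * mean_r / (1 + (k - 1) * mean_r)

print(f"split-half reliability (Spearman-Brown) = {split_half:.2f}")
print(f"standardized Cronbach's alpha           = {alpha:.2f}")
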
Interrater Reliability
-In some research, raters observe behaviours and make ratings or judgements
-You could have one rater make judgements about aggression, but the observations of a single rater might be unreliable. The solution is to use at least two raters who observe the same behaviour
-Interrater reliability: the extent to which raters agree in their observations
-A commonly used indicator of interrater reliability is Cohen's kappa (a bare-bones calculation is sketched below)
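
Cohen's kappa adjusts the raw proportion of agreement for the agreement expected by chance. The sketch below is a bare-bones version for two raters assigning categorical codes; the ratings are invented for illustration.

from collections import Counter

def cohens_kappa(rater_1, rater_2):
    """Chance-corrected agreement between two raters' categorical judgements."""
    n = len(rater_1)
    # Observed agreement: proportion of observations coded the same way by both raters.
    p_observed = sum(a == b for a, b in zip(rater_1, rater_2)) / n
    # Expected agreement: chance that both raters pick a category, summed over categories.
    counts_1, counts_2 = Counter(rater_1), Counter(rater_2)
    categories = set(rater_1) | set(rater_2)
    p_expected = sum((counts_1[c] / n) * (counts_2[c] / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codings of eight behaviours as aggressive ("A") or not ("N").
rater_1 = ["A", "N", "A", "A", "N", "N", "A", "N"]
rater_2 = ["A", "N", "A", "N", "N", "N", "A", "N"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")  # 0.75 for these ratings
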
Reliability and Accuracy of Measures
-Reliability tells us about measurement error, but it doesn't tell us whether we have a good measure of the variable of interest. For example, a scale that consistently reads two kilograms too heavy gives reliable readings, but they are not accurate
Construct validity of measures
-If something is valid, it is true in the sense that it is supported by available evidence
-Construct validity: the adequacy of the operational definition of variables