Chapter 5-7 Textbook Notes
PSYB01H3 – Anna Nagy
University of Toronto Scarborough

Chapter 5

- The most common measurement strategy is to ask people to tell you about themselves (e.g., "Rate your overall happiness").
- You can also directly observe behaviours (e.g., how many mistakes someone made).
- Psychological and neurological responses can be measured as well (e.g., heart rate, muscle tension).

Reliability of measures
- Reliability refers to the consistency or stability of a measure of behaviour.
- A reliable test yields the same result each time; the results should not fluctuate from one reading to the next. If there is fluctuation, there is error in the measurement device.
- Every measurement has two components:
  1. True score: the real score on the variable
  2. Measurement error
- Example: if you administer a highly reliable test multiple times, the scores might range from 97 to 103; on an unreliable test they might range from 85 to 115. The measurement error in the unreliable test is revealed in the greater variability of the scores (see the first sketch after these notes).
- Using unreliable measures is a waste of time because the results will be unstable and cannot be replicated.
- We can assess the stability of measures using correlation coefficients.
  o The most common correlation coefficient when discussing reliability is the Pearson product-moment correlation coefficient, symbolized as r.
  o Values range from -1.00 to +1.00: 0.00 means the variables are not related at all, values toward +1.00 indicate a positive relationship, and values toward -1.00 indicate a negative relationship.
- Test-retest reliability: assessed by measuring the same individuals at two points in time (see the test-retest sketch below).
  o If people receive similar scores both times, the measure reflects true scores rather than measurement error.
  o The correlation should be at least about 0.80 before we accept the measure as reliable.

Internal consistency reliability
- The assessment of reliability using responses at only one point in time; because all the items measure the same variable, they should yield similar or consistent results.
  o One indicator of internal consistency is split-half reliability: the correlation of an individual's total score on one half of the test with the total score on the other half.
  o The final measure will include items from both halves; the combined measure has more items and is therefore more reliable than either half by itself.
  o A drawback is that split-half reliability does not take into account each individual item's role in a measure's reliability (each question on a test is called an item).
- Cronbach's alpha is another indicator of internal consistency reliability and is based on the individual items (see the internal consistency sketch below).
  o Each item is correlated with every other item.
  o The value of alpha is based on the average of all of these correlation coefficients and the number of items in the measure.
- Item-total correlations examine the correlation between each item and the total score.
- Because Cronbach's alpha and item-total correlations look at individual items, items that do not correlate with the other items can be removed to increase reliability.

Interrater reliability
- A single rater might be unreliable, but using more than one rater increases reliability.
- Interrater reliability is the degree to which raters agree in their observations.
  o A commonly used indicator of interrater reliability is Cohen's kappa (see the kappa sketch below).

Reliability and accuracy of measures
- Accuracy and reliability are different things.
  o Example: a gas pump that puts the same amount of gas in your car every time is reliable, but whether it is accurate is still an open question. The only way to know its accuracy is to compare the amount it delivers to a standard measure of a litre.

Construct validity of measures
- Construct validity: the adequacy of the operational definition of a variable.
  o To what extent does the operational definition reflect the true theoretical meaning of the variable?
  o Construct validity is a question of whether the measure actually measures the construct it is intended to measure.

Indicators of construct validity
- Face validity: the evidence for validity is that the measure appears, on the face of it, to measure what it is supposed to measure.
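The following is a minimal Python sketch, not from the textbook, of the true score + measurement error idea above: the same true score is observed with small versus large random error, mimicking the 97-103 versus 85-115 ranges in the notes. The true score of 100, the error spreads, and the function names are all hypothetical choices for illustration.

```python
# Sketch: observed score = true score + random measurement error (illustrative values)
import random
import statistics

random.seed(0)
TRUE_SCORE = 100          # the "real" score on the variable

def administer(true_score, error_sd, n_times=10):
    """Simulate repeated administrations of the same test to the same person."""
    return [true_score + random.gauss(0, error_sd) for _ in range(n_times)]

reliable_scores   = administer(TRUE_SCORE, error_sd=1.5)   # small measurement error
unreliable_scores = administer(TRUE_SCORE, error_sd=7.5)   # large measurement error

for label, scores in [("reliable", reliable_scores), ("unreliable", unreliable_scores)]:
    print(f"{label:>10}: min={min(scores):6.1f}  max={max(scores):6.1f}  "
          f"sd={statistics.stdev(scores):5.2f}")
# The unreliable test shows much greater variability around the same true score.
```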
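Next, a minimal sketch of test-retest reliability as described in the notes: the Pearson product-moment correlation r between the same individuals' scores at two points in time. The score lists are made-up illustrative data, and the pearson_r helper is my own, not a library function.

```python
# Sketch: test-retest reliability via the Pearson product-moment correlation (r)
import math

time1 = [12, 15, 9, 20, 17, 11, 14, 18]   # hypothetical scores, first administration
time2 = [13, 14, 10, 19, 18, 10, 15, 17]  # same individuals, second administration

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(time1, time2)
print(f"test-retest r = {r:.2f}")   # values of about .80 or higher are
                                    # conventionally treated as acceptable
```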
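A minimal sketch of the internal consistency indicators on a small, made-up item-response matrix: split-half reliability, Cronbach's alpha (computed here with the common variance-based formula rather than by literally averaging inter-item correlations), and item-total correlations. The data and helper functions are hypothetical.

```python
# Sketch: split-half reliability, Cronbach's alpha, and item-total correlations
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

# Rows = respondents, columns = items (hypothetical 4-item questionnaire, 1-5 scale)
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
]
k = len(responses[0])

# Split-half: correlate totals on the first half of the items with totals on the second half
half1 = [sum(row[: k // 2]) for row in responses]
half2 = [sum(row[k // 2:]) for row in responses]
print(f"split-half r = {pearson_r(half1, half2):.2f}")

# Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
totals = [sum(row) for row in responses]
item_vars = [variance([row[i] for row in responses]) for i in range(k)]
alpha = (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")

# Item-total correlations: each item against the total score; low values flag
# items that could be removed to increase reliability
for i in range(k):
    item = [row[i] for row in responses]
    print(f"item {i + 1} vs total: r = {pearson_r(item, totals):.2f}")
```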
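Finally, a minimal sketch of interrater reliability using Cohen's kappa: raw percent agreement between two raters, corrected for the agreement expected by chance. The ratings and category labels are made up for illustration.

```python
# Sketch: Cohen's kappa = (observed agreement - chance agreement) / (1 - chance agreement)
from collections import Counter

rater_a = ["aggressive", "neutral", "aggressive", "neutral", "neutral",
           "aggressive", "neutral", "aggressive", "neutral", "neutral"]
rater_b = ["aggressive", "neutral", "neutral",    "neutral", "neutral",
           "aggressive", "neutral", "aggressive", "aggressive", "neutral"]

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n                 # p_o
    counts_a, counts_b = Counter(a), Counter(b)
    categories = set(a) | set(b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)  # p_e
    return (observed - expected) / (1 - expected)

print(f"Cohen's kappa = {cohens_kappa(rater_a, rater_b):.2f}")
# Kappa is lower than raw percent agreement because it removes chance agreement.
```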