# HSCI 307 Lecture Notes - Lecture 8: Face Validity, Content Validity, Convergent Validity

Published on 24 Jan 2018
## Lecture 8: Measurement

Understanding the type of data collected informs the appropriateness of the methods and analysis.

**Measurement process**
- Conceptualization: the process of thinking through the various meanings of a concept.
- Operationalization: the development of specific research procedures that will result in empirical observations.
- Conceptual definition: defines the concept in abstract, theoretical terms.
- Operational definition: specifies the concrete procedures used to measure the concept.
- Why are these definitions important? They allow for replication by making the concept measurable.

**Levels of measurement**
- See notebook.
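The notebook referenced above covers the details; as a quick reminder, a hedged sketch of the four standard levels of measurement, with invented health-science examples (not from the lecture):

```python
# The four classic levels of measurement, from least to most informative.
# All example values are invented for illustration.
levels = {
    "nominal":  ["blood type A", "blood type B"],  # categories, no order
    "ordinal":  ["mild", "moderate", "severe"],    # ordered, unequal gaps
    "interval": [36.5, 37.0, 38.2],                # equal gaps, no true zero (deg C)
    "ratio":    [0, 12, 24],                       # equal gaps and a true zero
}

for level, examples in levels.items():
    print(level, examples)
```

Nominal and ordinal are categorical; interval and ratio are continuous, which connects to the advice below to increase the level of measurement where possible.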
## Reliability and Validity

Perfect reliability and validity are virtually impossible to achieve.

**Reliability**: "the ability of a measuring instrument to give consistent results on repeated trials."

Four forms of reliability:
- Test-retest reliability: consistency across time.
- Inter-rater reliability: consistency across independent evaluations conducted by different individuals.
- Parallel forms reliability: consistency across indicators (different versions of the instrument give the same result).
- Internal consistency: the degree to which the items in a scale are correlated with one another.
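Two of these forms can be estimated numerically. Below is a minimal sketch (not from the lecture; the scores are invented) that computes Cronbach's alpha for internal consistency and a Pearson correlation for test-retest reliability, in plain Python.

```python
# Illustrative sketch with made-up data (not from the lecture).

def cronbach_alpha(items):
    """Internal consistency: `items` is a list of columns, one list of
    scores per scale item, same respondents in the same order."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

def pearson_r(x, y):
    """Test-retest reliability: correlate time-1 and time-2 scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Three scale items answered by five respondents (invented data).
items = [[4, 5, 3, 5, 4], [4, 4, 3, 5, 5], [3, 5, 2, 5, 4]]
print(round(cronbach_alpha(items), 2))  # prints 0.9

# The same scale administered twice (invented data).
time1 = [10, 12, 9, 15, 13]
time2 = [11, 12, 10, 14, 13]
print(round(pearson_r(time1, time2), 2))  # prints 0.99
```

Values near 1 indicate high reliability; by convention an alpha of about 0.7 or above is usually considered acceptable.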
Ways to improve reliability:
- Clear conceptualization.
- Increase the level of measurement (categorical -> continuous).
- Use multiple indicators (parallel forms).
- Pretests, pilot studies, and replication.
**Validity**: the accuracy we can place on the inferences we make about people based on their scores from a scale. A test CANNOT be valid if it is unreliable.

- Convergent validity: used with multiple indicators; based on the idea that indicators of one construct will act alike, or converge.
- Discriminant validity: used with multiple indicators; based on the idea that indicators of different constructs diverge.
Example: measuring two concepts (verbal and math ability) with two measures (indicators): a written paper-and-pencil test, or the teacher's opinion.
- Reliability: estimate the reliability of the written test through test-retest, parallel forms, or an internal consistency measure (same concept, same measure).
- Compare [verbal - written test] with [verbal - teacher's opinion]: same concept, different measures -> convergent validity.
- Compare [verbal - written test] with [math - written test]: different concepts, same measure -> discriminant validity.
- Compare [verbal - written test] with [math - teacher's opinion]: different concepts and different measures -> very discriminant.
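The comparisons above can be sketched as correlations (invented scores, not from the lecture): measures of the same concept should correlate highly (convergent), while measures of different concepts should correlate weakly (discriminant).

```python
# Hypothetical scores for five students (made-up data).

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

verbal_written = [70, 85, 60, 90, 75]
verbal_teacher = [72, 83, 62, 88, 78]   # same concept, different measure
math_written   = [55, 90, 80, 60, 70]   # different concept, same measure

# Convergent validity: high correlation between two measures of verbal ability.
print(round(pearson_r(verbal_written, verbal_teacher), 2))

# Discriminant validity: weak correlation between verbal and math measures.
print(round(pearson_r(verbal_written, math_written), 2))
```

With these invented numbers the first correlation comes out near 1 and the second near 0, which is the pattern the convergent/discriminant comparisons look for.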
**Face validity**: a type of measurement validity in which an indicator "makes sense" as a measure of a construct in the judgment of others, especially the scientific community.

**Content validity**: a type of measurement validity that requires a measure to represent all aspects of the conceptual definition of a construct.