PSYB32H3 Chapter 3: (a) Psychological and Biological Assessment

CHAPTER 3
3.1 - Reliability and Validity in Assessment
Reliability = The extent to which a test, measurement, or classification system produces
the same scientific observation each time it is applied.
o Inter-rater reliability = The relationship between the judgements that at least two
raters make independently about a phenomenon. → how similar the data
collected by different raters are; the degree to which two judges agree about an event.
o Test-retest reliability = The extent to which people being observed twice or
taking the same test twice score in generally the same way. (ex. intelligence
testing)
o Alternate-form reliability = The extent to which scores on two forms of a test are
consistent.
o Internal consistency reliability = The degree to which items on a test are related
to one another.
o In each of these types of reliability, a correlation (a measure of how closely two
variables are related) is calculated between raters or sets of items. The higher
the correlation, the better the reliability.
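The correlation calculation behind these reliability coefficients can be sketched in Python. This is an illustrative example only; the scores below are invented, and the `pearson_r` helper is written from the standard Pearson formula rather than taken from the textbook.

```python
# Sketch: a reliability coefficient computed as a Pearson correlation
# between two sets of scores (here, test-retest reliability).
# All data are hypothetical, for illustration only.

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Test-retest: the same five people take an intelligence test twice.
time1 = [98, 105, 112, 121, 130]
time2 = [101, 104, 110, 124, 128]

r = pearson_r(time1, time2)
print(round(r, 3))  # a value near 1.0 indicates high test-retest reliability
```

The same function applies to the other reliability types: correlate two raters' judgements (inter-rater), two forms of a test (alternate-form), or sets of items (internal consistency).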
Validity is the extent to which a measure fulfills its intended purpose.
o Validity is related to reliability: unreliable measures will not have good validity.
An unreliable measure does not yield consistent results, and therefore will
not relate very strongly to other measures.
Ex. an unreliable measure of coping is not likely to relate well to how a
person adjusts to a stressful life experience.
o Content validity = The extent to which a measure adequately samples the
domain of interest. → measure represents all facets of a given construct
Ex. a measure of life stress that consists of a list of 43 life experiences.
Respondents indicate which of these experiences (e.g., losing one’s job)
they have had in some time period. Content validity would be high if most
stressful events that people experience are captured by this list.
o Criterion validity = The extent to which a measure is associated in an expected
way with some other measure (the criterion).
Criterion validity is referred to as concurrent validity when both
variables are measured at the same point in time.
Ex. a measure of the distorted thoughts in depression → Criterion
validity for this test could be established by showing that the test is
actually related to depression; that is, depressed people score
higher on the test than do non-depressed people
Criterion validity can also be assessed by evaluating the measure’s ability to
predict some other variable that is measured in the future; this kind of
criterion validity is often referred to as predictive validity.
Ex. IQ tests were developed to predict future school performance.
o Construct validity = The extent to which scores or ratings on an assessment
instrument relate to other variables or behaviours according to some theory or
hypothesis → whether the test measures what it claims, or purports, to measure,
as specified by a theory/hypothesis.
A construct is an inferred attribute, such as anxiousness or distorted
cognition, that a test is trying to measure.
Ex. anxiety-proneness questionnaire → The construct validity question is
whether the variation we observe between people on a self-report test of
anxiety proneness is really due to individual differences in anxiety
proneness. People vary in their willingness to admit to undesirable
characteristics such as anxiety proneness; thus, scores on the
questionnaire will be partly determined by this characteristic as well as by
anxiety proneness itself.
Construct validity is evaluated by looking at a variety of data from multiple
sources. For example, people diagnosed with an anxiety disorder and people
without could be compared on their scores on an anxiety-proneness
measure; the results would support construct validity if the people with
anxiety disorders scored higher than a control group.
Ex. if the measure has good construct validity, we would expect scores of
clients with anxiety disorders to become lower after a course of a therapy
that is effective in reducing anxiety
o Case validity = The extent to which the formulation of a case accurately
encompasses the multiple influences that contribute to distress and dysfunction.
3.2 - Psychological Assessment
Clinical interview = A conversation between a clinician and a client that is aimed at
determining diagnosis, history, causes for problems, and possible treatment options.
The paradigm within which an interviewer operates influences the type of information
sought, how it is obtained, and how it is interpreted.
o A psychoanalytically trained clinician can be expected to inquire about the
person’s childhood.
o The behaviourally oriented clinician is likely to focus on current environmental
conditions that can be related to changes in the person’s behaviour →
concentrate more on what can be observed
o Psychodynamic clinicians assume that people entering therapy usually are not
even aware of what is truly bothering them.
Structured interview = An interview in which the questions are set out in a prescribed
fashion for the interviewer. Assists professionals in making diagnostic decisions based
upon standardized criteria. → based on DSM
o The SCID (Structured Clinical Interview for DSM disorders) is a branching
interview; that is, the client’s response to one question determines the next
question that is asked.
Evidence-based assessment = The selection of assessment measures based on
research evidence attesting to the reliability and validity of the measures and reading
level required. The concern is that many clinicians opt for measures that have less
research support.
o Numerous problems undermining clinical assessment in actual settings were
identified including: (1) the continuing proliferation and predominance of the
unstructured clinical interview; (2) the low reliability and validity of unstructured
clinical interviews; (3) suggestions that very low numbers of clinicians adhere to
best practice assessment guidelines; and (4) the relatively rare use of
assessment in formal treatment monitoring by clinicians
Psychological tests = Standardized procedures designed to measure a person’s
performance on a particular task or to assess his or her personality.
o Statistical norms for the test can thereby be established as soon as sufficient
data have been collected.
Standardization = The process of constructing an assessment procedure
that has norms and meets the various psychometric criteria for reliability
and validity.
o Responses of a particular person can then be compared with the statistical norms.
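Comparing an individual's response to established norms can be sketched as a standard (z) score. The norm values below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Sketch: expressing a raw test score relative to statistical norms
# established during standardization. Norm mean/SD below are invented.

def z_score(raw, norm_mean, norm_sd):
    """How many standard deviations a raw score lies from the normative mean."""
    return (raw - norm_mean) / norm_sd

# Suppose the standardization sample yields a mean of 50 and an SD of 10.
client_score = 65
z = z_score(client_score, norm_mean=50, norm_sd=10)
print(z)  # 1.5 → the client scored 1.5 SDs above the normative mean
```

A z-score is one common way such comparisons are made; percentile ranks relative to the norm group serve the same purpose.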