
PSYB32H3 Chapter 3: Clinical Assessment

Professor: Konstantine Zakzanis

Chapter 3 Clinical Assessment
3.1 Reliability and Validity in Assessment
The concepts of reliability and validity are extremely complex; an entire subfield of psychology, psychometrics, exists primarily for their study.
3.1.1 Reliability
Reliability refers to consistency of measurement. In each of the following types of reliability, correlation is used to measure how closely two variables are related to each other: the higher the correlation, the better the
reliability
Inter-rater reliability refers to the degree to which two independent observers or judges agree.
o e.g. in a baseball game, the third-base umpire may or may not agree with the home-plate umpire as to
whether a line drive down the left-field line is fair or foul
Test-retest reliability measures the extent to which people being observed twice or taking the same test
twice, perhaps several weeks or months apart, score in generally the same way.
o Note that this type of reliability makes sense only when the theory assumes that people will not change
appreciably between testings on the variable being measured
o Thus, if the participant has been through counselling, medication, or various other kinds of treatment
between testings, this reliability no longer holds
o e.g. intelligence test
Alternate-form reliability is the extent to which scores on two forms of a test are consistent.
o Sometimes two forms of a test are used, rather than giving the same test twice
o This addresses the concern that people will remember their answers from the first test and aim merely to be consistent
Internal consistency reliability assesses whether the items on a test are related to one another.
o e.g. for an anxiety questionnaire containing 20 questions, we would expect the items to be interrelated, or
correlate with one another, if they truly tap anxiety
3.1.2 Validity
Validity generally concerns whether a measure fulfills its intended purpose. A key point is that validity is
related to reliability: unreliable measures will not have good validity
Unreliable measures do not yield consistent results, meaning the two variables measured in a reliability
test will not correlate strongly with each other, i.e. the r value will be near zero
If an assessment test is not reliable, the test will be of low validity because it is not fulfilling its intended purpose
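The claim that unreliable measures cannot have good validity can be illustrated with the classical correction-for-attenuation bound from psychometrics (not stated in these notes, but standard): the observed correlation between two measures cannot exceed the square root of the product of their reliabilities. A sketch with hypothetical reliability values:

```python
from math import sqrt

def max_observed_validity(rel_x, rel_y):
    """Classical psychometric bound: the observed correlation between
    measures x and y cannot exceed sqrt(rel_x * rel_y), where rel_x and
    rel_y are the two measures' reliability coefficients."""
    return sqrt(rel_x * rel_y)

# Even a perfectly valid construct, measured with unreliable tests,
# yields a weak observed correlation (hypothetical numbers).
bound_reliable = max_observed_validity(0.9, 0.9)    # reliable measures: high ceiling
bound_unreliable = max_observed_validity(0.2, 0.2)  # unreliable measures: low ceiling
```

This makes the textbook point concrete: as reliability drops toward zero, the best achievable validity coefficient drops toward zero with it.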
There are several different types of validity:
Content validity refers to whether a measure adequately samples the domain of interest.
o High content validity is obtained when all or most of the factors contributing to the domain of
interest, e.g. some kind of disorder, are captured by the measure
o Low content validity results if events that actually occur are not represented
Criterion validity is evaluated by determining whether a measure is associated in an expected way with
some other measure (the criterion).
o Concurrent validity: both measures are taken at the same point in time
A type of evidence that can be gathered to defend the use of a test for predicting other outcomes
Concurrent validity is demonstrated when a test correlates well with a measure that has
previously been validated
The two measures in the study are taken at the same time; this is in contrast to predictive
validity, where one measure occurs earlier and is meant to predict some later measure
o Predictive validity: evaluating the measure's ability to predict some other variable that is measured in
the future
e.g. IQ tests were developed to predict future school performance; a test measuring
distorted thinking could be used to predict the development of episodes of depression in the future


Construct validity is relevant when we want to interpret a test as a measure of some characteristic or
construct that is not simply or directly defined
o A construct is an inferred attribute that a test is trying to measure
o Just because we call our test a measure of the disorder of interest, and the items seem to be about the
tendency to become pathological, it is not certain that the test is a valid measure of that disorder
o Construct validity is evaluated by looking at a wide variety of data from multiple sources, here are
some examples
Construct validity is supported if people with the disorder of interest score higher than a
control group
Construct validity is increased if a self-report measure is associated with an observational
one, e.g. observations of fidgeting, trembling, or excess sweating for the measurement of anxiety
Construct validity is also supported when the scores of clients with the disorder of interest become lower
after a course of therapy that is effective in reducing the disorder
o More broadly, construct validity is an important part of theory testing
Questions of construct validity are related to a particular theory of the disorder of interest,
e.g. that proneness to anxiety is caused by certain childhood experiences. We could then
obtain further evidence for the construct validity of our questionnaire by showing that it
relates to these childhood experiences
At the same time, this would also gather support for the theory of the disorder
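The first kind of construct-validity evidence above (a clinical group scoring higher than controls) is usually checked with a group-comparison statistic. A minimal sketch with hypothetical questionnaire scores, using Welch's t statistic (a standard choice for comparing two independent group means; the notes do not name a specific statistic):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent groups: a large positive
    value means group a scores well above group b relative to the noise."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / (variance(a) / na + variance(b) / nb) ** 0.5

# Hypothetical questionnaire scores: a group diagnosed with the
# disorder of interest versus a control group.
clinical = [24, 28, 22, 30, 26, 27]
control  = [15, 18, 14, 17, 16, 19]

t = welch_t(clinical, control)
# A large positive t supports construct validity: the clinical group
# scores higher, as the theory behind the test predicts.
```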
3.2 Psychological Assessment
Psychological assessment techniques are designed to determine cognitive, emotional, personality, and behavioural
factors in psychopathological functioning
3.2.1 Clinical interviews
To the layperson, the word “interview” connotes a formal, highly structured conversation, but we find it useful to
construe the term as any interpersonal encounter, conversational in style, in which one person, the interviewer, uses
language as the principal means of finding out about another person, the interviewee. What distinguishes a clinical interview
from a casual conversation or a poll is the attention the interviewer pays to how the respondent
answers, or does not answer, questions.
The paradigm within which an interviewer operates influences the type of information sought, how it is
obtained, and how it is interpreted
o Because different paradigms attribute different factors to a disorder, clinicians operating within
them ask different questions
Regardless of paradigm orientation, it is critical for the clinician to establish rapport with the client
o The clinician must obtain the client's trust; it would be naïve to assume that a client will easily reveal
information to another person, even an authority figure with the title "doctor"
The interview can be a source of considerable information to the clinician, and it is of unquestionable
importance in abnormal psychology and psychiatry
o Exactly how information is collected is left largely up to the particular interviewer and depends, too,
on the responsiveness and responses of the interviewee
Both reliability and validity may indeed be low for a single clinical interview that is conducted in an
unstructured fashion
o Clinicians usually do more than one interview with a given client, and hence a self-corrective process is
probably at work
Structured interview
A structured interview is one in which the questions are set out in a prescribed fashion for the interviewer, e.g.
the SCID (Structured Clinical Interview for DSM disorders), which covers some DSM-5 disorders
With adequate training of clinicians, inter-rater reliability for structured interviews is generally good
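Inter-rater reliability for categorical diagnoses (as produced by a structured interview) is conventionally quantified with Cohen's kappa, which corrects raw agreement for chance. The notes do not name this statistic, but it is the standard one; a sketch with hypothetical diagnoses:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical judgments (e.g. diagnoses from a structured interview)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Agreement expected by chance, from each rater's marginal frequencies.
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical diagnoses by two clinicians using the same structured
# interview (labels are illustrative only).
r1 = ["MDD", "GAD", "MDD", "none", "GAD", "MDD", "none", "GAD"]
r2 = ["MDD", "GAD", "MDD", "none", "MDD", "MDD", "none", "GAD"]
kappa = cohens_kappa(r1, r2)
```

Kappa runs from at most 1 (perfect agreement) down through 0 (chance-level agreement); values around 0.8 or higher are usually read as good inter-rater reliability.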