Week 2 09/18/2013
Chapters: 2 and 11
Variable: an attribute or characteristic that may vary over time or from case to case.
Example: gender, income, etc.
Independent: a variable that has, or is assumed to have, a causal impact on a dependent variable.
Dependent: a variable that is caused, or is assumed to be caused, by an independent variable.
Reliability: whether measures of concepts (poverty, prejudice, religiousness) are consistent.
Would the same results be obtained if the same measurement technique were applied several times to the same subject?
A study is replicable if others are able to repeat it and get the same results.
Validity: the integrity of the conclusions generated by a piece of research.
Three types of validity:
1. Measurement/construct validity
Applies mainly to quantitative research, which measures social concepts.
• Whether a particular indicator actually does measure what it is
supposed to measure.
o Do IQ tests really measure intelligence? In other words, are IQ
tests a valid measure of intelligence?
2. Internal validity
♦ Can you differentiate between cause and effect? Did the cause actually produce the effect?
3. External validity
♦ The study’s findings are applicable in settings outside the research
environment – everyday settings.
• Naturalism: a style of research designed to
minimize disturbance to the natural/everyday social world.
Minimize artificial methods of data collection; reflect natural social settings. For instance: immersing
yourself in a new society to study it.
♦ Whether the results of a study can be generalized beyond the
people/cases analyzed by the researcher.
♦ Representative sample: a sample that is similar to the population
in all important respects.
Nomothetic explanation: seeks general laws and principles. Probabilistic approach. Mostly related to
quantitative research.
Nomothetic explanations have three criteria of causation:
a. Correlation:
⇒ The proposed cause and the proposed effect have to vary together.
• i.e. as the number of violent movies watched changes, the level of violence
would change as well.
b. Time order:
⇒ The proposed cause must precede the effect in time.
• Show that the increase in the watching of violent movies came BEFORE the
increase in violence.
c. Non-spuriousness:
⇒ Alternative explanations for the observed correlation have to be ruled out.
⇒ Spurious: false or illegitimate.
Idiographic explanation: Qualitative approach: rich description of a person/group based on the
perceptions & feelings of the people studied.
Proximate, specific causes.
Complete, in-depth explanation.
Gathering Data
Questionnaire: a set of questions/response items the respondent completes without aid from the interviewer.
Interview schedule: questions designed to be asked by an interviewer; used in a structured interview.
All respondents are asked exactly the same questions in the same order with the aid of a formal interview schedule.
Causality: a causal connection between variables. NOT the same as a correlation between the variables.
Most of the above criteria apply more to quantitative than to qualitative research. Many qualitative researchers state that they
have their own criteria of assessment. One is trustworthiness:
General criterion used by some writers in assessing the quality of qualitative research.
4 specific criteria:
i. Credibility: PARALLEL TO MEASUREMENT AND INTERNAL VALIDITY:
how believable are the findings?
Intersubjectivity: condition in which two or more observers of the same phenomenon
are in agreement as to what they have observed.
ii. Transferability: PARALLEL TO EXTERNAL VALIDITY: do the findings apply to other contexts?
iii. Dependability: PARALLEL TO RELIABILITY: are the findings likely to be consistent at other times?
iv. Confirmability: PARALLEL TO REPLICABILITY: would another researcher
come to the same conclusion?
Experiments are rare in sociology because it is difficult to manipulate (i.e., alter the independent variable to
study its impact on the dependent variable) the ideas and concepts that interest sociologists.
The concepts that sociologists do want to study have complex, long-term causes that cannot be easily
simulated in experiments.
i.e. Feminism, gender roles, etc.
Laboratory experiments: take place in an artificial setting.
Field experiments: take place in real-life surroundings.
Classic experimental Design
Subjects are randomly assigned to two groups. The manipulation is carried out on the experimental/treatment group.
The other group is not given the treatment – the control group.
The dependent variable is measured before the experimental manipulation to make sure the two groups are
roughly equal at the start.
Face validity: an indicator (something employed to measure a concept when no direct measure is
available) appears to measure the concept in question.
External validity: results of a study can be generalized beyond the context in which they were generated.
Nearly all experiments in social sciences involve a deception of some sort.
It raises ethical questions – form of lying.
The researcher has greater control over the research environment.
Choice of Design
The type of design chosen depends on the kind of explanation sought.
Interested in cause and effect, general laws and principles.
Attempts to generalize to a broader population.
e.g., ‘The prevalence of suicide in a particular social group is a function of the level of integration individuals
typically have in the group.’
Rich description of a person or group.
Not meant to apply to persons or groups who were not part of the study.
e.g., ‘Jade became addicted to crack because she never got over her parents’ divorce, she felt she was
never really accepted by her friends, and had a classmate who offered her crack,’ plus many more details of
how Jade interpreted her life.
Criteria for Evaluating Social Research
Reliability – Results should remain the same each time a particular measurement technique is used on
the same subject.
Unless things change and results reflect those changes.
Results should not be arbitrary.
Replicability – The results remain the same when others repeat all or part of a study.
The procedures used to conduct the research are sound.
If nobody else can find what you found, then is it really valuable?
People need to describe their methods in such depth that someone else can go out, conduct the same
research, and come to the same result.
Validity – three types:
1) measurement/construct validity
2) internal validity,
3) external validity.
1) Measurement validity (or construct validity): involves the question, ‘Are you measuring what you
want to measure?’
e.g., Is education a valid measure of socioeconomic status?
Measuring what you want to measure.
2) Internal validity: can the study establish causal ordering? Do we know which is the cause and which is the effect?
e.g., did the study establish that personal income level in Canada really is influenced by one’s level of
education? Could income be influenced by something else?
To distinguish between a cause and effect.
Just because there is a relationship between two variables does not necessarily mean one causes the other.
Needs to have a CAUSAL relationship.
Two key terms used when discussing causation:
Independent variable: the proposed cause – the predictor,
e.g., ‘level of education’
Dependent variable: the proposed effect – the outcome,
e.g., ‘level of income’
Occurs second as a result of the independent variable.
We assume that independent causes dependent.
3) External validity: two primary concerns:
Are the findings applicable to situations outside the research environment?
Naturalistic studies tend to satisfy this criterion.
Can the findings be generalized beyond the people or cases studied?
Studies using representative samples tend to satisfy this criterion.
Can you take what you studied and apply it to situations outside the research group? The more, the better.
Research Designs: Experiments
Rare in sociology/criminology/legal studies.
Many variables of interest are not subject to experimental manipulation.
Ethical concerns preclude performing experiments.
Many phenomena of interest have long-term, complex causes that cannot be simulated in experiments.
Even where applicable, experimental models do not get at the perceptions and feelings of research subjects.
Two kinds of experiments:
Field experiments are conducted in real-life surroundings,
e.g., classrooms, prisons, neighbourhoods.
Laboratory experiments take place in artificial environments.
The researcher controls the research environment in order to isolate what they think is the cause, ruling out other factors.
Need to randomly assign people to groups.
If you don’t, systematic differences between the groups can suggest a correlation that does not actually exist.
Easier to randomly assign research subjects.
Therefore enhanced internal validity.
Easier to replicate.
Key concepts relevant to experiments:
Experimental or treatment group: receives a treatment or manipulation of some kind. Something
has been done.
Control group: does not get the treatment or manipulation. Nothing has been done.
Random assignment: participants are placed in the experimental or control group using a random procedure.
Pretest: measurement of the dependent variable before the experimental manipulation. Needed to establish
the general starting conditions so you can see what changes occur.
Posttest: measurement of the dependent variable after the experimental manipulation.
Classic Experimental Design
Independent and dependent variables are identified.
What it is that you want to measure.
The dependent (outcome) variable is observed or measured (pretest) in each of the control and treatment
groups at time 1.
The treatment group receives the treatment/manipulation while the control group is left alone.
The dependent (outcome) variable is observed or measured (posttest) in each of the groups at time 2.
Differences between each group are compared.
Ideally change will only occur in the treatment group.
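The steps of the classic design can be sketched as a small simulation with made-up numbers (the subject pool, baseline scores, and treatment effect are all invented for illustration).

```python
# Minimal sketch of the classic experimental design, using made-up numbers.
import random
import statistics

random.seed(42)

subjects = list(range(100))
random.shuffle(subjects)            # random assignment to the two groups
treatment = set(subjects[:50])      # experimental/treatment group
control = set(subjects[50:])        # control group: left alone

# Pretest: measure the dependent variable in both groups at time 1.
pre = {s: random.gauss(50, 5) for s in subjects}

# Manipulation: only the treatment group receives the treatment, which
# (in this invented example) raises scores by about 10 points.
post = {}
for s in subjects:
    effect = 10 if s in treatment else 0
    post[s] = pre[s] + effect + random.gauss(0, 2)

# Posttest comparison at time 2: change within each group.
change_t = statistics.mean(post[s] - pre[s] for s in treatment)
change_c = statistics.mean(post[s] - pre[s] for s in control)
print(f"treatment change: {change_t:.1f}, control change: {change_c:.1f}")
```

As the design intends, the change shows up only in the treatment group; because assignment was random, that difference can be attributed to the manipulation rather than to pre-existing group differences.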
True experiments try to eliminate all other possible (rival) explanations.
Threats to internal validity in experiments that lack random assignment and/or a control group include:
History: some event occurring after the treatment was given may have influenced the dependent variable.