ECO331: Behavioural Economics
Midterm Review
Experiment Design; Internal & External Validity
Homo economicus: we are utility-maximizing and self-interested (our utility does not depend on
others' utilities), cognition is costless, our preferences are stable, and our beliefs update as we
learn.
We use standard assumptions because: there is only one way to be rational but many ways to be
irrational, so rationality gives a clean hypothesis to test against; the assumptions can be highly
representative of the real world; even if individuals are irrational/biased, their errors may cancel
out in the aggregate (regression to the mean); parsimony simplifies things; and perhaps people
actually do well by them!
Induced Value Theory: the proper use of a reward medium to induce the desired behaviour in
subjects. The reward must satisfy: monotonicity (more reward is always preferred), salience (the
reward depends on the subject's actions), and dominance (changes in utility come predominantly
from the reward).
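One common way to formalize these three conditions (a sketch; the notation u(m, z) for a subject's utility over the monetary reward m and everything else z, with actions a, is mine, not from the lecture):

$$\text{Monotonicity: } \frac{\partial u}{\partial m} > 0, \qquad \text{Salience: } m = m(a), \qquad \text{Dominance: } \left|\frac{\partial u}{\partial m}\,\Delta m\right| \gg \left|\frac{\partial u}{\partial z}\,\Delta z\right|.$$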
Nuisance variables affect results but are not what you want to test. Constrain them by blocking
or randomizing (see the sketch after the next entry).
Factorial design: vary all treatment variables independently to obtain the clearest possible
evidence on their effects.
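A minimal sketch of a factorial design with randomized assignment, under assumptions I am adding (a hypothetical 2x2 design; the variable names and treatment labels are illustrative, not from the course):

```python
import itertools
import random

# Two hypothetical treatment variables (labels are illustrative).
incentive = ["piece_rate", "tournament"]
composition = ["mixed", "single_sex"]

# Factorial design: every combination of the treatment variables is a cell,
# so each variable's effect can be read off independently of the other's.
cells = list(itertools.product(incentive, composition))

# Randomizing subjects across cells spreads nuisance variables evenly.
subjects = [f"subject_{i}" for i in range(20)]
random.shuffle(subjects)
assignment = {s: cells[i % len(cells)] for i, s in enumerate(subjects)}

for subject, cell in sorted(assignment.items()):
    print(subject, cell)
```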
Good practice: average payment should exceed subjects' opportunity cost; keep the design
simple; preserve subjects' privacy; allocate treatments randomly; record demographics.
Independent observations: if knowing the value of X in no way improves my ability to predict
the value of Y, then X and Y are independent.
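In standard probability notation (my addition, not from the slides), this definition reads:

$$X \perp Y \iff P(Y = y \mid X = x) = P(Y = y) \text{ for all } x, y,$$

or equivalently $P(X = x, Y = y) = P(X = x)\,P(Y = y)$.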
Vernon Smith: parallelism, i.e., lab results carry over to real-world results.
Charles Plott: reduce the number of plausible alternative explanations.
Slides: we cannot assign people education levels, so we conduct experiments. Experiments allow
variation of one causal factor while keeping others constant. Thus, when a prediction fails, it is
not the whole theory that is rejected, but a particular assumption/causal factor.
To test a theory:
Internal validity: the extent to which the treatment differences in an experiment are due to the
hypothesized mechanism; how well the measurement ties to the cause. Assess the plausibility of
alternative explanations for treatment differences. (GNR competition example: perhaps the
experiment took too long, the environment wasn't like a real market, the subjects were not
random, etc.) If subjects perform better, that could mean they are more competitive, but it could
also mean they took a break or got used to the experiment. Arguments must be made to account
for these differences.

For example, perhaps women had already maxed out effort in the mixed competition; testing a
single-gender competition showed their performance improved, so the gap was not due to ability.
External validity: the extent to which we can extrapolate experimental results to the real world.
Extrapolation is hard if the sample is too small or consists only of students, or if subjects change
behaviour because they know they are being watched/researched.
1. Briefly explain the distinction between a within-subject design and a between-subject
design. What is the beauty of the within-subject design? If it is so beautiful, why isn't it
always used?
The distinction between a within-subject design and a between-subject design is the number of
treatments each subject experiences. In a within-subject design, each subject does all treatments;
in a between-subject design, each subject receives only one treatment. The beauty of the
within-subject design is that because the same subjects are used in each treatment, noise
variables are reduced, thereby reducing error variance. It lets experimenters measure the change
in the outcome variable within the same subjects, which allows for more confidence. It isn't
always used because of carryover effects: participation in one condition may affect participation
in another, e.g., through experimenter demand, order effects, or learning. Solutions: randomly
assigned orders, crossovers (ABA), dual trials; a sketch of these order schemes follows below.
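A minimal sketch of how randomized orders and an ABA crossover might be generated (the treatment labels and helper names are my own illustration, not from the course):

```python
import random

treatments = ["A", "B"]  # illustrative treatment labels

def random_order(treatments):
    """Counterbalancing: randomly permute the treatment order for one subject."""
    order = list(treatments)
    random.shuffle(order)
    return order

def crossover_aba(treatments):
    """ABA crossover: return to the first condition to detect carryover effects."""
    first, second = treatments
    return [first, second, first]

for i in range(4):
    print(f"subject_{i}", "order:", random_order(treatments),
          "crossover:", crossover_aba(treatments))
```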
2. Consider Gneezy, Niederle and Rustichini (i.e., the competition paper we did discuss in
lecture). What experimenter demand effects might we be worried about? (Ideally you
should consider both "strong" and "weak" experimenter demand effects.) Should we be
worried about experimenter demand effects in this particular case?
No, we should not be worried about experimenter demand effects in this particular case, as the
paper shows that participants were focused on completing the task at hand.
Experimenter demand effects - the perceptions or beliefs the subject holds about the purpose of
the experiment and what the experimenter is looking for.
Strong experimenter demand effects: the subject knows what the experimenter wants (e.g.,
compliant subject vs. rebel subject).
Weak experimenter demand effects: treatment differences make certain information salient (e.g.,
you are told that others donate money to charity; Treatment A gives info about others, Treatment
B gives none -- when presented with the info, the subject thinks "maybe I should use it...").
GNEEZY, NIEDERLE, RUSTICHINI (GNR)
Notes about this article in class:
T1: Piece Rate - no significant gender differences.
T2: Competitive Pay (Mixed Tournament) - men's performance is significantly higher than
under the noncompetitive incentive schemes (piece rate and random pay). Women's performance
does not significantly differ between the mixed tournament and the piece-rate scheme.
T3: Random Pay - no significant gender differences.

T4: Single-Gender Tournament - men did not perform significantly differently than in mixed
tournaments, but there is a significant difference between single-sex tournaments and the
piece-rate and random schemes. Women's performance in single-sex tournaments is significantly
higher than in the noncompetitive treatments. However, there is no significant gender difference
in performance within the single-sex tournament treatment.
T5: Single-Gender Piece Rate - conclusion: the increase in women's performance in single-sex
tournaments is due to the incentive scheme and not the absence of male participants. Does the
absence of males alone raise performance? No: it is the incentives.
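A hedged sketch of the kind of comparison behind these "significant difference" statements: a two-sample t-test on made-up performance scores (illustrative data, NOT GNR's actual data; assumes scipy is available):

```python
import random
from statistics import mean
from scipy import stats

# Made-up scores (e.g., mazes solved); purely illustrative, not GNR's data.
random.seed(0)
piece_rate = [random.gauss(10, 2) for _ in range(40)]
tournament = [random.gauss(12, 2) for _ in range(40)]

# Two-sample t-test: is the mean treatment difference statistically significant?
t_stat, p_value = stats.ttest_ind(tournament, piece_rate)
print(f"piece-rate mean = {mean(piece_rate):.2f}, tournament mean = {mean(tournament):.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p => significant difference
```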
1. Generally, what determines whether a person exerts "higher" effort in competition?
- Confidence/chance of winning
- Low cost of effort
- Sufficient rewards (trophy effect - liking winning in general) → T4 vs. T2, same expected
returns (see the calculation after this list)
- Information effect → ranking → self-confidence/opinion of others → T4 vs. T2, assuming no
gender effect
- Shirking as a public good (e.g., the prof will curve, therefore no effort/incentive for the class)
→ T4 vs. T2
- Preferences → some choose not to compete in certain environments
- Risk aversion → T3
- Women/men may have different maximums → T4 vs. T1
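How expected returns can be held equal across incentive schemes (a sketch under assumptions I am adding: n equally able contestants, piece rate w per unit solved, and a tournament prize of n·w per unit to the winner; these parameters are illustrative, not necessarily GNR's exact payments):

$$E[\text{tournament pay}] = \frac{1}{n}(n \cdot w) + \frac{n-1}{n} \cdot 0 = w = \text{piece-rate pay per unit}.$$

With equal win probabilities, expected pay per unit is identical across the schemes, so any performance gap must come from something other than expected returns.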
2. Any extra reason particular to this environment?
- Israel (relatively equal society)
- All join the military
- “Top” engineering students
- Must select into experiment
3. Attributes of the task
- Empirically gender-neutral
- Short duration
- Constant personal feedback
- No inter-competitor feedback
- "Plausibly" gender-biased
- Time constrained
- No penalty for failure
- Easily countable task
3. One potential explanation for competition not improving effort relative to a non-competitive
environment is that a cohort might have already been exerting near maximal effort in the
non-competitive environment. Ought we be concerned about this when interpreting the main
results of Gneezy, Niederle and Rustichini?
Cohort - the group of subjects that participated in a particular session.
No, because there was a condition in which women competed in same-gender competitions and
were able to improve relative to a non-competitive environment, thereby proving they are ABLE
to exert more effort (i.e., they were not already at maximal effort).