PHIL2420 Lecture Notes - Lecture 8: Sample Size Determination, Spontaneous Remission, Acne Vulgaris
PHIL 2420
Critical Thinking
May 3, 2018
WEEK 8
How Data Can Mislead & Analyzing Scientific Results
Critical reasoning involves a certain kind of attitude.
Be careful when using any source, especially the internet.
Reject the null hypothesis
• Not always easy
• Hard part is figuring out what the null hypothesis is in the first place
• Easier if you have a result
If something is decided by a vote, it’s probably not a scientific fact.
How much of the material on websites about things I don't already know is reliable?
Just because it's free doesn't mean it's no good.
Statistical Significance
• If p < 0.05, we have good evidence to reject H0. (see previous lecture)
Scientific Reliability
• John Ioannidis – Professor of Medicine, Stanford University
Schizophrenia Gene
• If the result of every test is published. . .
• There are 500 bogus results for every true one
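The 500-to-1 ratio falls out of simple base-rate arithmetic. A sketch with assumed numbers (roughly 10,000 candidate genes with only one real link, a 0.05 significance threshold, and 80% power — the specific counts are illustrative, not from the lecture):

```python
# Hypothetical numbers chosen to reproduce the lecture's ~500:1 ratio.
n_genes = 10_000   # candidate genes tested (assumed)
n_true = 1         # genes actually linked to the disease (assumed)
alpha = 0.05       # false-positive rate per test
power = 0.8        # chance a real effect is detected (assumed)

false_positives = alpha * (n_genes - n_true)   # bogus "hits"
true_positives = power * n_true                # real hits, on average

print(round(false_positives), round(true_positives, 1))  # 500 0.8
```

So if every test is published, published "discoveries" are overwhelmingly false positives, even though each individual test looks rigorous.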
P-Hacking . . . Or how to lie with data
• Nobody likes a negative or statistically insignificant result
• Torture the data until it confesses
o Run a different kind of analysis until you get the right answer
• Go fishing… or become a Texas sharpshooter
• Repeatability is of no use unless ALL results get published
• Exacerbated by Publish or Perish environment
o Pressure on people to have significant results
How to Become a Texas Sharpshooter
• Randomly shoot in barn
• Then draw a circle where most of the dots fall
• Random data tends to cluster… it’s not equally distributed
• If you throw a dart a thousand times and then draw the bullseye afterward, you
look like you're good at throwing darts
• Get data THEN formulate hypothesis
• Cartoon of jellybeans
o By random chance alone, there's a good chance about 1 in 20 tests comes out significant
• There’s a lot of pressure on scientists to get a significant result
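The jellybean cartoon can be simulated. Under a true null hypothesis, p-values are uniformly distributed on [0, 1], so each of 20 jellybean colours has an independent 1-in-20 chance of landing below 0.05. A minimal sketch (the 20-colour setup mirrors the cartoon; the simulation itself is an illustration, not from the lecture):

```python
import random

# Simulate many repetitions of "test 20 jellybean colours against cancer"
# when no colour has any real effect: draw 20 uniform p-values per run.
random.seed(42)
trials = 10_000
hits = 0
for _ in range(trials):
    p_values = [random.random() for _ in range(20)]  # one p-value per colour
    hits += sum(p < 0.05 for p in p_values)

print(hits / trials)  # average "significant" colours per run: close to 1.0
```

On average about one colour per run is "linked to cancer" purely by chance — and that one colour is the headline.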
Publication Bias
• Authors more likely to submit papers with significant result
• Who wants to know that jellybeans don’t cause cancer?
• File-Drawer problem
• If result isn’t significant, it goes in the file drawer
• Makes Ioannidis's problem worse
o More likely to believe a 1-in-20 random result because that's the only thing
in the journals
Take Home Lesson 1
Just because it says so in a scientific journal doesn’t mean it’s true or even likely.
For a dissenting opinion to Ioannidis 2005, see Ashton 2018.
Sampling – process of determining properties of a population from the properties of a
sample of the population
• Bag of marbles
• 50% red 50% green
• Pick out 10, 6 green 4 red. Infer there are 60% green 40% red.
• What are the odds of getting 60-40 if 50-50?
• Make 50-50 null hypothesis
• Odds of choosing red
o Analysis of coin tossing applies to this situation for random sampling with
replacement
• Sampling is working out the conditional probability of the composition of the marbles
• Drug cures disease
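The marble question ("what are the odds of 60-40 if the bag is really 50-50?") is a binomial calculation, since sampling with replacement works like coin tossing. A worked sketch:

```python
from math import comb

# Probability of drawing exactly 6 green in 10 draws (with replacement)
# if the bag really is 50-50 green/red -- the null hypothesis.
p = comb(10, 6) * 0.5 ** 10   # 210 ways out of 2^10 = 1024 sequences
print(round(p, 3))            # 0.205
```

A 6-4 split happens about 20% of the time under a 50-50 bag — far above the 0.05 threshold, so this sample gives no real evidence against the null hypothesis.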