STA215H5 Chapter Notes - Chapter 15-18: Birth Weight, Unimodality, Preterm Birth

STA215; Chapter Notes (15-18)
Sampling Distributions
Often, the goal of a study is to draw conclusions about how the values of a
variable X are distributed in a population, using the information from a sample of
size n.
In real life, we observe only one sample.
Here we explore what would happen if we had many samples.
How can you compare two estimates of a parameter?
The observed value of a statistic depends on the sample chosen and will vary
from sample to sample.
We cannot decide which estimate is "better" based on a single sample.
Sample statistics are random variables themselves, so they should be compared
using their probability distributions.
The Sampling Distribution of a statistic is the probability distribution of the statistic. It is
obtained by calculating the value of the statistic for each possible sample (over repeated
samples) and finding the associated probabilities:
For populations with a small number of possible samples, we can calculate the
distribution exactly.
In practice, populations allow too many possible samples, so we use
computer software to simulate possible samples and record the proportion of
times each value of the statistic occurs.
Point Estimators
A point estimator of a parameter is a rule/formula that calculates a single value to
estimate the parameter.
A point estimate is the value that the point estimator takes on.
For example, X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ is a point estimator of µ (the population mean), and x̄ = 15 is a point
estimate.
Often there will be many point estimators for one parameter. We can examine the
sampling distribution of each point estimator to see how large the difference is between
the estimate and the true value of the parameter (called the error of estimation).
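The distinction between estimator, estimate, and error of estimation can be made concrete in a short sketch. The population and sample values here are invented for illustration; in practice µ is unknown, which is exactly why we study the sampling distribution of the estimator.

```python
# Point estimator: a rule/formula that computes a single value from a sample
def sample_mean(xs):
    """X-bar = (1/n) * sum of the X_i, the point estimator of mu."""
    return sum(xs) / len(xs)

# Hypothetical population, so we can see the true parameter for comparison
population = [2, 4, 4, 6, 8, 10]
mu = sum(population) / len(population)   # true parameter (unknown in practice)

sample = [4, 6, 8]                       # one observed sample
x_bar = sample_mean(sample)              # the point estimate for this sample
error = abs(x_bar - mu)                  # error of estimation
print(f"mu = {mu}, x-bar = {x_bar}, error of estimation = {error}")
```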
Properties of Sampling Distributions
If the sampling distribution of a statistic has mean equal to the parameter, we call the
statistic an unbiased estimate of the parameter. Otherwise, it is called biased.
The standard deviation of the sampling distribution of a statistic is called the standard
error (se).
Unbiasedness: a desirable property. If a statistic is a "good" estimator of a
parameter, we expect its distribution to cluster/center around the value it is
trying to estimate (the parameter).
Minimum variance: we also want the estimator to have a small standard error, since
the standard error measures the spread of the distribution of the statistic.
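Both properties can be checked by simulation for the sample mean, which is an unbiased estimator of µ. This sketch again uses an invented population; it compares the mean of the simulated sampling distribution to µ (unbiasedness) and reports the standard deviation of that distribution (the standard error).

```python
import random
import statistics

random.seed(0)
population = [2, 4, 4, 6, 8, 10]   # hypothetical population
mu = statistics.mean(population)   # true parameter
n, num_samples = 3, 20_000

# Simulated sampling distribution of the sample mean
means = [statistics.mean(random.choices(population, k=n))
         for _ in range(num_samples)]

# Unbiasedness: the mean of the sampling distribution should be close to mu
print(f"mu = {mu:.3f}, mean of sampling distribution = {statistics.mean(means):.3f}")

# Standard error: the standard deviation of the sampling distribution
print(f"standard error = {statistics.stdev(means):.3f}")
```

With many simulated samples, the average of the sample means sits very close to µ, while the standard error is smaller than the population standard deviation because averaging over n observations reduces spread.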