ADMS 2320 Chapter Summary.doc

Department: Administrative Studies
Course: ADMS 2320
Professor: All Professors
Semester: Fall

Chapter 9: Sampling Distribution

• A sampling distribution is created by, as the name suggests, sampling.
• The method we employ relies on the rules of probability and the laws of expected value and variance to derive the sampling distribution.

Sampling Distribution of the Sample Mean

• $E(\bar{X}) = \mu_{\bar{x}} = \mu$, where $\mu = \sum x\,P(x)$.
• $V(\bar{X}) = \sigma_{\bar{x}}^2 = \dfrac{\sigma^2}{n}$ and $\sigma_{\bar{x}} = \dfrac{\sigma}{\sqrt{n}}$, where $\sigma^2 = \sum x^2 P(x) - \mu^2$.
• If $X$ is normal, $\bar{X}$ is normal. If $X$ is nonnormal, $\bar{X}$ is approximately normal for sufficiently large sample sizes.
  o Note: the definition of "sufficiently large" depends on the extent of nonnormality of $X$ (e.g., heavily skewed or multimodal).
  o If the population is normal, then $\bar{X}$ is normally distributed for all values of $n$.
  o If the population is nonnormal, then $\bar{X}$ is approximately normal only for larger values of $n$.
  o In many practical situations, a sample size of 30 may be sufficiently large to allow us to use the normal distribution as an approximation for the sampling distribution of $\bar{X}$.
• The standard deviation of the sampling distribution, $\sigma/\sqrt{n}$, is called the standard error.
• Remember that $\mu$ and $\sigma^2$ are the parameters of the population of $X$.
• To create the sampling distribution of $\bar{X}$, we repeatedly draw samples from the population and calculate $\bar{x}$ for each sample. Thus, we treat $\bar{X}$ as a brand-new random variable, with its own distribution, mean, and variance.
• Central Limit Theorem: the sampling distribution of the mean of a random sample drawn from any population is approximately normal for a sufficiently large sample size. The larger the sample size, the more closely the sampling distribution of $\bar{X}$ will resemble a normal distribution.
• Statisticians have shown that the mean of the sampling distribution is always equal to the mean of the population, and that the standard error is equal to $\sigma/\sqrt{n}$ for infinitely large populations. However, if the population is finite, the standard error is $\sigma_{\bar{x}} = \dfrac{\sigma}{\sqrt{n}}\sqrt{\dfrac{N-n}{N-1}}$, where $N$ is the population size and $\sqrt{\dfrac{N-n}{N-1}}$ is called the finite population correction factor. If the population size is large relative to the sample size, the finite population correction factor is close to 1 and can be ignored.
• As a rule of thumb, we treat any population that is at least 20 times larger than the sample size as large. In practice, most applications involve populations that qualify as large; as a consequence, the finite population correction factor is usually omitted.
• The sampling distribution can be used to make inferences about population parameters. To do so, the sample mean can be standardized to the standard normal distribution using the formulation $Z = \dfrac{\bar{X} - \mu}{\sigma/\sqrt{n}}$.
• Another way to state the probability: $P\!\left(\mu - z_{\alpha/2}\,\dfrac{\sigma}{\sqrt{n}} < \bar{X} < \mu + z_{\alpha/2}\,\dfrac{\sigma}{\sqrt{n}}\right) = 1 - \alpha$.
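These properties are easy to verify by simulation. The following is a minimal sketch (the Exponential(1) population, the sample size n = 30, and the number of repetitions are illustrative choices, not from the notes) that draws repeated samples from a heavily skewed population and checks that the sample means are centred at μ with standard error σ/√n:

```python
import numpy as np

rng = np.random.default_rng(42)

mu, sigma = 1.0, 1.0      # Exponential(1) population: mean 1, std 1, heavily skewed
n = 30                    # "sufficiently large" sample size per the rule of thumb
reps = 100_000            # number of repeated samples drawn

# Draw reps samples of size n and compute the mean of each one.
sample_means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

print("mean of sample means:", sample_means.mean())   # close to mu = 1.0
print("std of sample means: ", sample_means.std())    # close to sigma/sqrt(n) = 0.183
```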
Sampling Distribution of a Proportion

• The estimator of a population proportion of successes is the sample proportion. That is, we count the number of successes $X$ in a sample of size $n$ and compute $\hat{P} = X/n$.
• Using the laws of expected value and variance, we can determine the mean, variance, and standard deviation of $\hat{P}$. (The standard deviation of $\hat{P}$ is called the standard error of the proportion.)
  o $E(\hat{P}) = \mu_{\hat{p}} = p$
  o $V(\hat{P}) = \sigma_{\hat{P}}^2 = \dfrac{p(1-p)}{n}$
  o $\sigma_{\hat{P}} = \sqrt{p(1-p)/n}$
• Sample proportions can be standardized to a standard normal distribution using the formulation $Z = \dfrac{\hat{P} - p}{\sqrt{p(1-p)/n}}$.
• The final sampling distribution introduced is that of the difference between two sample means.
• This requires that independent random samples be drawn from each of two normal populations.
• If this condition is met, the sampling distribution of the difference between the two sample means, $\bar{X}_1 - \bar{X}_2$, is normally distributed.
• Note: if the two populations are not both normally distributed but the sample sizes are "large" (> 30), the distribution of $\bar{X}_1 - \bar{X}_2$ is approximately normal.
• The expected value and variance of this sampling distribution are:
  o Mean: $E(\bar{X}_1 - \bar{X}_2) = \mu_{\bar{x}_1 - \bar{x}_2} = \mu_1 - \mu_2$
  o Variance: $V(\bar{X}_1 - \bar{X}_2) = \sigma_{\bar{x}_1 - \bar{x}_2}^2 = \dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}$
  o Standard deviation (also called the standard error of the difference between two means): $\sigma_{\bar{x}_1 - \bar{x}_2} = \sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}$
• We can compute $Z$ (a standard normal random variable) in this way: $Z = \dfrac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}}$

Chapter 10: Introduction to Estimation

• Statistical inference is the process by which we acquire information and draw conclusions about populations from samples. There are two general procedures for making inferences about populations: estimation and hypothesis testing.
• The objective of estimation is to determine the approximate value of a population parameter on the basis of a sample statistic. For example, the sample mean ($\bar{x}$) is employed to estimate the population mean (μ).
• A point estimator draws inferences about a population by estimating the value of an unknown parameter using a single value, or point. Drawbacks: (1) the point probabilities in continuous distributions are virtually zero; (2) we would expect a point estimator to get closer to the parameter value as the sample size increases, yet point estimators do not reflect the effects of larger sample sizes.
• An interval estimator draws inferences about a population by estimating the value of an unknown parameter using an interval. For example, suppose we want to estimate the mean summer income of a class of business students. For n = 25 students, the point estimate $\bar{x}$ is calculated to be $400/week; the interval estimate is "the mean income is between $380 and $420 per week."
• Desirable qualities in estimators include:
  o Unbiasedness: an unbiased estimator of a population parameter is an estimator whose expected value is equal to that parameter. E.g., the sample mean $\bar{X}$ is an unbiased estimator of the population mean μ, since $E(\bar{X}) = \mu$.
  o Consistency: an unbiased estimator is said to be consistent if the difference between the estimator and the parameter grows smaller as the sample size grows larger. E.g., $\bar{X}$ is a consistent estimator of μ because $V(\bar{X}) = \sigma^2/n$; as n grows larger, the variance of $\bar{X}$ grows smaller.
  o Relative efficiency: if there are two unbiased estimators of a parameter, the one whose variance is smaller is said to be relatively efficient. Both the sample median and sample mean are unbiased estimators of the population mean; however, statisticians have established that the sample median has a greater variance than the sample mean, so we choose $\bar{X}$ since it is relatively efficient compared to the sample median.
• We can calculate an interval estimator from a sampling distribution by:
  o drawing a sample of size n from the population, and
  o calculating its mean, $\bar{x}$.
  By the central limit theorem, $\bar{X}$ is normally (or approximately normally) distributed, so $Z = \dfrac{\bar{X} - \mu}{\sigma/\sqrt{n}}$ has a standard normal (or approximately standard normal) distribution.
• Confidence interval estimator of μ (LCL and UCL): $\bar{x} \pm z_{\alpha/2}\,\dfrac{\sigma}{\sqrt{n}}$
  o A larger confidence level produces a wider confidence interval.
  o Larger values of the variance produce wider confidence intervals.
  o Increasing the sample size decreases the width of the confidence interval while the confidence level can remain unchanged. Note: this also increases the cost of obtaining additional data.
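As a concrete illustration of the estimator $\bar{x} \pm z_{\alpha/2}\,\sigma/\sqrt{n}$, here is a minimal sketch (the ten income values and σ = 10 are made-up inputs chosen so that $\bar{x}$ = 400, echoing the example above; they are not from the notes):

```python
import numpy as np
from scipy import stats

# Hypothetical weekly incomes for n = 10 students (chosen so x_bar = 400)
data = np.array([392, 405, 388, 410, 399, 415, 384, 402, 397, 408])
sigma = 10.0   # population standard deviation, assumed known in this chapter
alpha = 0.05   # 1 - alpha = 95% confidence level

x_bar = data.mean()
z = stats.norm.ppf(1 - alpha / 2)              # z_{alpha/2} = 1.96 for alpha = .05
half_width = z * sigma / np.sqrt(len(data))

print(f"LCL = {x_bar - half_width:.2f}, UCL = {x_bar + half_width:.2f}")
```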
• We can control the width of the interval by determining the sample size necessary to produce narrow intervals. Suppose we want to estimate the mean demand to within 5 units (W = 5); i.e., we want the interval estimate to be $\bar{x} \pm 5$. Since the interval estimator is $\bar{x} \pm z_{\alpha/2}\,\dfrac{\sigma}{\sqrt{n}}$, it follows that $z_{\alpha/2}\,\dfrac{\sigma}{\sqrt{n}} = 5$.
• Solving for n gives the sample size to estimate a mean: $n = \left(\dfrac{z_{\alpha/2}\,\sigma}{W}\right)^2$

Chapter 11: Introduction to Hypothesis Testing

• Five critical concepts in hypothesis testing:
  1. There are two hypotheses: the null hypothesis ($H_0$) and the alternative hypothesis ($H_1$).
  2. The testing procedure begins with the assumption that the null hypothesis is true.
  3. The goal is to determine whether there is enough evidence to infer that the alternative hypothesis is true.
  4. There are two possible decisions:
     - Reject $H_0$: conclude that there is enough evidence to support the alternative hypothesis.
     - Do not reject $H_0$: conclude that there is not enough evidence to support the alternative hypothesis.
  5. Two possible errors can be made:
     - Type I error: reject a true null hypothesis. $P(\text{Type I error}) = \alpha$.
     - Type II error: do not reject a false null hypothesis. $P(\text{Type II error}) = \beta$.

                        H0 is true              H0 is false
  Reject H0             Type I error (P = α)    Correct decision
  Do not reject H0      Correct decision        Type II error (P = β)

• The rejection region is a range of values such that, if the test statistic falls into that range, we decide to reject the null hypothesis in favor of the alternative hypothesis.
  o Z-test, α = .05, for $H_1: \mu < 100$. Rejection region: $Z < -z_\alpha = -z_{.05} = -1.645$.
  o Z-test, α = .01, for $H_1: \mu > 100$. Rejection region: $Z > z_\alpha = z_{.01} = 2.33$.
  o α = .05, for $H_1: \mu \neq 100$. Rejection region: $|Z| > z_{\alpha/2} = z_{.025} = 1.96$.
• The p-value of a test is the probability of observing a test statistic at least as extreme as the one computed, given that the null hypothesis is true.
• The smaller the p-value, the more statistical evidence exists to support the alternative hypothesis:
  o If the p-value is less than 1%, there is overwhelming evidence to support the alternative hypothesis.
  o If the p-value is between 1% and 5%, there is strong evidence to support the alternative hypothesis.
  o If the p-value is between 5% and 10%, there is weak evidence to support the alternative hypothesis.
  o If the p-value exceeds 10%, there is no evidence to support the alternative hypothesis.
• If the p-value is less than α, we judge the p-value to be small enough to reject the null hypothesis. If the p-value is greater than α, we do not reject the null hypothesis.
• Conclusions of a test of hypothesis:
  o If we reject the null hypothesis, we conclude that there is enough evidence to infer that the alternative hypothesis is true.
  o If we do not reject the null hypothesis, we conclude that there is not enough statistical evidence to infer that the alternative hypothesis is true.
  o Remember: the alternative hypothesis is the more important one. It represents what we are investigating.
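To make the mechanics concrete, here is a minimal sketch of a one-tailed z-test (the hypothesized mean of 100 echoes the rejection-region examples above, but the data values and σ = 10 are hypothetical inputs, not from the notes):

```python
import numpy as np
from scipy import stats

mu0 = 100.0    # hypothesized mean under H0
sigma = 10.0   # known population standard deviation
alpha = 0.05

# Hypothetical sample of n = 20 observations
data = np.array([104, 99, 112, 101, 108, 97, 110, 103, 106, 102,
                 95, 109, 100, 107, 111, 98, 105, 113, 96, 104])

z = (data.mean() - mu0) / (sigma / np.sqrt(len(data)))
p_value = 1 - stats.norm.cdf(z)   # one-tailed test, H1: mu > 100

print(f"z = {z:.3f}, p-value = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: enough evidence to support H1.")
else:
    print("Do not reject H0: not enough evidence to support H1.")
```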
The Probability of a Type II Error

• It is important that we understand the relationship between Type I and Type II errors; that is, how the probability of a Type II error is calculated and how it is interpreted.
• A Type II error occurs when a false null hypothesis is not rejected. For example, suppose α = .05, n = 20, σ = .005, and we test $H_0: \mu = 2.25$ against $H_1: \mu > 2.25$.
  o Rejection region: $\dfrac{\bar{x} - \mu}{\sigma/\sqrt{n}} > z_\alpha$, i.e., $\bar{x} > 2.25 + z_{.05}\,(\sigma/\sqrt{n}) = 2.25 + 1.645\,(.005/\sqrt{20}) = 2.252$.
  o If the true mean is $\mu = 2.255$, then
    $\beta = P(\bar{x} < 2.252 \mid \mu = 2.255) = P\!\left(\dfrac{\bar{x} - \mu}{\sigma/\sqrt{n}} < \dfrac{2.252 - 2.255}{.005/\sqrt{20}}\right) = P(z < -2.68) = .5 - .4963 = .0037$
• Decreasing the significance level α increases the value of β, and vice versa: shifting the critical value line to the right (to decrease α) means a larger area under the lower curve for β.
• If the probability of a Type II error (β) is judged to be too large, we can reduce it by:
  o increasing α, and/or
  o increasing the sample size, n.
• Increasing the sample size n and/or increasing α decreases the value of β.
• The power of a test is defined as 1 − β. It represents the probability of rejecting the null hypothesis when it is false.
• When more than one test can be performed in a given situation, it is preferable to use the test that is correct more often. If one test has higher power than a second test, the first test is said to be more powerful and is the preferred test.
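The β calculation above is easy to reproduce; the following minimal sketch uses the same numbers as the example (note that the notes round the critical value to 2.252 before computing β, and the sketch does the same):

```python
import numpy as np
from scipy import stats

alpha, n, sigma = 0.05, 20, 0.005
mu0, mu_true = 2.25, 2.255                     # hypothesized and actual means from the example

se = sigma / np.sqrt(n)
x_crit = mu0 + stats.norm.ppf(1 - alpha) * se  # reject H0 when x_bar exceeds this value
x_crit = round(x_crit, 3)                      # the notes round to 2.252 before computing beta

beta = stats.norm.cdf((x_crit - mu_true) / se)  # P(do not reject H0 | mu = 2.255)
print(f"critical x_bar = {x_crit}")             # 2.252
print(f"beta = {beta:.4f}, power = {1 - beta:.4f}")  # beta ~ .0037, power ~ .996
```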
Chapter 12: Inference about a Population

• When the population standard deviation is unknown and the population is normal, the test statistic for testing hypotheses about μ is $t = \dfrac{\bar{x} - \mu}{s/\sqrt{n}}$, which is Student t distributed with ν = n − 1 degrees of freedom.
• Confidence interval estimator of μ when σ is unknown: $\bar{x} \pm t_{\alpha/2}\,\dfrac{s}{\sqrt{n}}$
• Rejection region (ν = n − 1):
  o $t < -t_{\alpha,\nu}$
  o $t > t_{\alpha,\nu}$
  o $t < -t_{\alpha/2,\nu}$ or $t > t_{\alpha/2,\nu}$
• The Student t distribution is robust, which means that if the population is nonnormal, the results of the t-test and confidence interval estimate are still valid provided that the population is "not extremely nonnormal."
• To check this requirement, draw a histogram of the data and see how "bell shaped" the resulting figure is. If the histogram is extremely skewed, the population could be considered "extremely nonnormal" and the t-statistics would not be valid.

Estimating the Totals of Finite Populations

• Large populations are defined as populations that are at least 20 times the sample size.
• We can use the confidence interval estimator of a mean to produce a confidence interval estimator of the population total: $N\!\left(\bar{x} \pm t_{\alpha/2}\,\dfrac{s}{\sqrt{n}}\right)$, where N is the size of the finite population.

Inference about a Population Variance

• If we are interested in drawing inferences about a population's variability, the parameter we need to investigate is the population variance, $\sigma^2$.
• The sample variance ($s^2$) is an unbiased, consistent, and efficient point estimator of $\sigma^2$. Moreover, the statistic $\chi^2 = \dfrac{(n-1)s^2}{\sigma^2}$ has a chi-squared distribution with n − 1 degrees of freedom.
• Confidence interval estimator of $\sigma^2$:
  o Lower confidence limit (LCL): $\dfrac{(n-1)s^2}{\chi^2_{\alpha/2,\,n-1}}$
  o Upper confidence limit (UCL): $\dfrac{(n-1)s^2}{\chi^2_{1-\alpha/2,\,n-1}}$
• Rejection region:
  o $\chi^2 < \chi^2_{1-\alpha,\,n-1}$
  o $\chi^2 > \chi^2_{\alpha,\,n-1}$
  o $\chi^2 < \chi^2_{1-\alpha/2,\,n-1}$ or $\chi^2 > \chi^2_{\alpha/2,\,n-1}$
• Decreasing the sample size decreases the test statistic and increases the p-value of the test.
• Increasing the sample size narrows the intervals.

Inference about a Population Proportion

• When data are nominal, we count the number of occurrences of each value and calculate proportions. Thus, the parameter of interest in describing a population of nominal data is the population proportion p. This parameter was based on the binomial experiment.
• Recall the use of this statistic: $\hat{P} = X/n$, where $\hat{P}$ is the sample proportion and X is the number of successes in a sample of size n.
• When np and n(1 − p) are both greater than 5, the sampling distribution of $\hat{P}$ is approximately normal with mean $\mu = p$ and standard deviation $\sigma = \sqrt{p(1-p)/n}$.
• Test statistic for p: $z = \dfrac{\hat{p} - p}{\sqrt{p(1-p)/n}}$
• The confidence interval estimator for p is given by $\hat{p} \pm z_{\alpha/2}\sqrt{\hat{p}(1-\hat{p})/n}$.
• Selecting the sample size: $n = \left(\dfrac{z_{\alpha/2}\sqrt{\hat{p}(1-\hat{p})}}{W}\right)^2$
• Estimating the total number of successes in a large finite population: $N\!\left(\hat{p} \pm z_{\alpha/2}\sqrt{\hat{p}(1-\hat{p})/n}\right)$

Chapter 13: Inference About Comparing Two Populations

• To test and estimate the difference between two population means, we draw random samples from each of two populations.
• Sampling distribution of $\bar{x}_1 - \bar{x}_2$:
  1. $\bar{x}_1 - \bar{x}_2$ is normally distributed if the original populations are normal, and approximately normal if the populations are nonnormal and the sample sizes are large ($n_1, n_2 > 30$).
  2. The expected value of $\bar{x}_1 - \bar{x}_2$ is $\mu_1 - \mu_2$.
  3. The variance of $\bar{x}_1 - \bar{x}_2$ is $\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}$, and the standard error is $\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}$.
• When the population variances are known and the sampling distribution is normal or approximately normal, we can build the test statistic (z-test) and the interval estimator:
  $z = \dfrac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}}$ and $(\bar{x}_1 - \bar{x}_2) \pm z_{\alpha/2}\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}$
• However, the z-statistic is rarely used because the population variances $\sigma_1^2$ and $\sigma_2^2$ are almost always unknown. Instead we use a t-statistic, and we consider two cases for the unknown population variances: when we believe they are equal, and when they are not.
• Since the population variances are unknown, we cannot know for certain whether they are equal, but we can conduct an F-test of $\sigma_1^2/\sigma_2^2$ to determine whether the two population variances differ.
• Test statistic and interval estimator for $\mu_1 - \mu_2$ when the two population variances are equal ($\sigma_1^2 = \sigma_2^2$):
  $t = \dfrac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{s_p^2\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}$, where $s_p^2 = \dfrac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}$; interval estimator: $(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2}\sqrt{s_p^2\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}$
  o Rejection region: $t > t_{\alpha,\nu}$, $t < -t_{\alpha,\nu}$, or ($t > t_{\alpha/2,\nu}$ or $t < -t_{\alpha/2,\nu}$), with $\nu = n_1 + n_2 - 2$.
• Test statistic and interval estimator for $\mu_1 - \mu_2$ when the two population variances are not equal ($\sigma_1^2 \neq \sigma_2^2$):
  $t = \dfrac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}$; interval estimator: $(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2}\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}$
  o Degrees of freedom: $\nu = \dfrac{\left(s_1^2/n_1 + s_2^2/n_2\right)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}$
  o Rejection region: $t > t_{\alpha,\nu}$, $t < -t_{\alpha,\nu}$, or ($t > t_{\alpha/2,\nu}$ or $t < -t_{\alpha/2,\nu}$).
• The number of degrees of freedom associated with the equal-variances test statistic and confidence interval estimator is always greater than or equal to the number of degrees of freedom associated with the unequal-variances versions for given sample sizes $n_1$ and $n_2$: $n_1 + n_2 - 2 \geq \nu$, with ν as defined above.
• Larger numbers of degrees of freedom have the same effect as larger sample sizes, and larger sample sizes yield more information by producing more powerful tests (lower Type II error probabilities) and narrower confidence interval estimates. Therefore, whenever there is insufficient evidence that the variances are unequal, it is preferable to perform the equal-variances t-test.
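Here is a minimal sketch of the equal-variances t-test (the two samples are made-up data; SciPy's ttest_ind pools the variances when equal_var=True, matching the formulas above):

```python
import numpy as np
from scipy import stats

# Hypothetical samples from two populations
sample1 = np.array([23.1, 25.4, 22.8, 26.0, 24.5, 23.9, 25.1, 24.2])
sample2 = np.array([21.7, 22.9, 20.8, 23.5, 22.0, 21.4, 22.6, 23.1])

# Equal-variances (pooled) t-test of H0: mu1 - mu2 = 0; equal_var=True pools
# the two sample variances into s_p^2 with n1 + n2 - 2 degrees of freedom.
t_stat, p_value = stats.ttest_ind(sample1, sample2, equal_var=True)
print(f"t = {t_stat:.3f}, two-tailed p-value = {p_value:.4f}")

# The same statistic computed from the formulas in the notes:
n1, n2 = len(sample1), len(sample2)
sp2 = ((n1 - 1) * sample1.var(ddof=1) + (n2 - 1) * sample2.var(ddof=1)) / (n1 + n2 - 2)
t_manual = (sample1.mean() - sample2.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
print(f"t (manual) = {t_manual:.3f}")
```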
Hypothesis Testing and Estimating a Ratio of Two Variances

• Hypotheses: $H_0: \dfrac{\sigma_1^2}{\sigma_2^2} = 1$; $H_1: \dfrac{\sigma_1^2}{\sigma_2^2} \neq 1$, or $H_1: \dfrac{\sigma_1^2}{\sigma_2^2} > 1$, or $H_1: \dfrac{\sigma_1^2}{\sigma_2^2} < 1$.
• Test statistic: $F = \dfrac{s_1^2}{s_2^2}$, with $\nu_1 = n_1 - 1$ and $\nu_2 = n_2 - 1$ degrees of freedom.
• Rejection region: $F > F_{\alpha,\nu_1,\nu_2}$ or $F < F_{1-\alpha,\nu_1,\nu_2}$; for a two-tailed test, $F > F_{\alpha/2,\nu_1,\nu_2}$ or $F < F_{1-\alpha/2,\nu_1,\nu_2}$.
• Confidence interval estimator: $\text{LCL} = \left(\dfrac{s_1^2}{s_2^2}\right)\dfrac{1}{F_{\alpha/2,\nu_1,\nu_2}}$, $\text{UCL} = \left(\dfrac{s_1^2}{s_2^2}\right)F_{\alpha/2,\nu_2,\nu_1}$

Matched Pairs Experiment

• When an observation in one sample is matched with an observation in a second sample, this is called a matched pairs experiment.
• Hypothesis test:
  o $H_0: \mu_D = 0$, where $\mu_D = \mu_1 - \mu_2$; $H_1: \mu_D \neq 0$, or $H_1: \mu_D > 0$, or $H_1: \mu_D < 0$.
  o Rejection region: $t > t_{\alpha,\,n_D-1}$, $t < -t_{\alpha,\,n_D-1}$, or ($t > t_{\alpha/2,\,n_D-1}$ or $t < -t_{\alpha/2,\,n_D-1}$).
  o Test statistic: $t = \dfrac{\bar{x}_D - \mu_D}{s_D/\sqrt{n_D}}$
  o Confidence interval estimator: $\bar{x}_D \pm t_{\alpha/2}\,\dfrac{s_D}{\sqrt{n_D}}$

Inference about the Difference between Two Population Proportions

• As mentioned previously, with nominal data we calculate proportions of occurrences of each type of outcome. Thus, the parameter to be tested and estimated in this section is the difference between two population proportions, $p_1 - p_2$.
• To draw inferences about $p_1 - p_2$, we take samples from each population and calculate the sample proportions: $\hat{p}_1 = \dfrac{x_1}{n_1}$ and $\hat{p}_2 = \dfrac{x_2}{n_2}$.
• Sampling distribution of $\hat{p}_1 - \hat{p}_2$:
  o The statistic $\hat{p}_1 - \hat{p}_2$ is approximately normally distributed if the sample sizes are large enough that $n_1\hat{p}_1$, $n_1(1-\hat{p}_1)$, $n_2\hat{p}_2$, and $n_2(1-\hat{p}_2)$ are all greater than or equal to 5.
  o Mean: $E(\hat{p}_1 - \hat{p}_2) = p_1 - p_2$
  o Variance: $V(\hat{p}_1 - \hat{p}_2) = \dfrac{p_1(1-p_1)}{n_1} + \dfrac{p_2(1-p_2)}{n_2}$; standard error: $\sigma_{\hat{p}_1 - \hat{p}_2} = \sqrt{\dfrac{p_1(1-p_1)}{n_1} + \dfrac{p_2(1-p_2)}{n_2}}$

Hypothesis Tests for Two Population Proportions

• Case 1 (testing $H_0: p_1 - p_2 = 0$):
  o $H_1: (p_1 - p_2) \neq 0$, or $H_1: (p_1 - p_2) > 0$, or $H_1: (p_1 - p_2) < 0$.
  o Rejection region: $Z > z_\alpha$, $Z < -z_\alpha$, or ($Z > z_{\alpha/2}$ or $Z < -z_{\alpha/2}$).
  o Test statistic: $z = \dfrac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}$, where the pooled proportion is $\hat{p} = \dfrac{x_1 + x_2}{n_1 + n_2}$.
• Case 2 (testing $H_0: p_1 - p_2 = D$):
  o $H_1: (p_1 - p_2) \neq D$, or $H_1: (p_1 - p_2) > D$, or $H_1: (p_1 - p_2) < D$.
  o Rejection region: $Z > z_\alpha$, $Z < -z_\alpha$, or ($Z > z_{\alpha/2}$ or $Z < -z_{\alpha/2}$).
  o Test statistic: $z = \dfrac{(\hat{p}_1 - \hat{p}_2) - D}{\sqrt{\dfrac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \dfrac{\hat{p}_2(1-\hat{p}_2)}{n_2}}}$
• Confidence interval estimator for both Case 1 and Case 2: $(\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2}\sqrt{\dfrac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \dfrac{\hat{p}_2(1-\hat{p}_2)}{n_2}}$
• This formula is valid when $n_1\hat{p}_1$, $n_1(1-\hat{p}_1)$, $n_2\hat{p}_2$, and $n_2(1-\hat{p}_2)$ are all greater than or equal to 5.
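A minimal sketch of the Case 1 test (the success counts and sample sizes are hypothetical; the pooled proportion p-hat is used because H0 assumes p1 = p2):

```python
import numpy as np
from scipy import stats

x1, n1 = 120, 400    # successes and sample size, sample 1 (hypothetical)
x2, n2 = 90, 350     # successes and sample size, sample 2 (hypothetical)

p1_hat, p2_hat = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)    # pooled estimate, used under H0: p1 = p2

z = (p1_hat - p2_hat) / np.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
p_value = 2 * (1 - stats.norm.cdf(abs(z)))   # two-tailed, H1: p1 - p2 != 0
print(f"z = {z:.3f}, p-value = {p_value:.4f}")
```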
Chapter 15: Analysis of Variance

• Analysis of variance is a technique that allows us to compare two or more populations of interval data. Analysis of variance is:
  o an extremely powerful and widely used procedure;
  o a procedure that determines whether differences exist between population means;
  o a procedure that works by analyzing sample variance.
• Independent samples are drawn from k populations. Note: these populations are referred to as treatments. It is not a requirement that $n_1 = n_2 = \dots = n_k$.
• x is the response variable, and its values are responses.
• $x_{ij}$ refers to the i-th observation in the j-th sample; e.g., $x_{3,5}$ is the 3rd observation of the 5th sample.
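A minimal sketch of a one-way analysis of variance on k = 3 treatments (the responses are hypothetical; SciPy's f_oneway returns the F statistic and p-value for H0: μ1 = μ2 = μ3):

```python
import numpy as np
from scipy import stats

# Hypothetical responses from k = 3 treatments (unequal sample sizes are allowed)
treatment1 = np.array([20.1, 22.3, 19.8, 21.5, 20.9])
treatment2 = np.array([23.4, 24.1, 22.8, 25.0])
treatment3 = np.array([19.2, 18.7, 20.3, 19.9, 18.5, 20.1])

# One-way ANOVA: F compares between-treatment to within-treatment variation.
f_stat, p_value = stats.f_oneway(treatment1, treatment2, treatment3)
print(f"F = {f_stat:.3f}, p-value = {p_value:.4f}")
```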
