
# ECON10005 Study Guide: Quantitative Methods 1 Lecture Revision Notes

17 pages · 48 views · Fall 2015

Department: Economics
Course Code: ECON10005
Professor: All

Lecture One:
Descriptive statistics is a process of concisely summarising the characteristics of sets of data.
Inferential statistics involves constructing estimates of these characteristics, and testing hypotheses
about the world, based on sets of data.
Modelling and analysis combines these to build models that represent relationships and trends in
reality in a systematic way.
Types of Data:
Numerical or quantitative data are real numbers with specific numerical values.
Nominal or qualitative data are non-numerical data sorted into categories on the basis of qualitative
attributes.
Ordinal or ranked data are nominal data that can be ranked.
- The population is the complete set of data that we seek to obtain information about
- The sample is a part of the population that is selected (or sampled) in some way using a
sampling frame
- A characteristic of a population is called a parameter
- A characteristic of a sample is called a statistic
- The difference between our estimate and the true (usually unknown) parameter is the
sampling error
- In a random sample, all population members have an equal chance of being sampled
In a population, a perfect stratum would be a group with:
- individual observations that are similar to the other observations in that stratum
- different characteristics from other strata in the population
- stratified sampling can improve accuracy
- may be more costly
In a population, a perfect cluster would be a group with:
- individual observations that are different from the other observations in that cluster
- similar characteristics to other clusters in the population
- can reduce costs
- may be less accurate
This is the cost/accuracy trade-off.
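The sampling designs above can be sketched numerically. Here is a minimal illustration with a made-up two-stratum population (all numbers are assumptions for the example, not course data):

```python
import random
import statistics

# Hypothetical population: 800 "low" incomes clustered near 40 and
# 200 "high" incomes clustered near 120 (two internally similar strata).
random.seed(1)
low = [random.gauss(40, 5) for _ in range(800)]
high = [random.gauss(120, 5) for _ in range(200)]
population = low + high
true_mean = statistics.mean(population)

# Simple random sample: every population member has an equal chance.
srs = random.sample(population, 100)

# Stratified sample: draw from each stratum in proportion to its size.
strat = random.sample(low, 80) + random.sample(high, 20)

print(round(true_mean, 2), round(statistics.mean(srs), 2),
      round(statistics.mean(strat), 2))
```

Because each stratum is internally homogeneous, the stratified estimate of the mean is typically more accurate than a simple random sample of the same size, at the cost of needing the strata identified in advance.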
Lecture Two:
Ceteris paribus: the assumption of holding all other variables constant
Cross-sectional data are:
- collected from (across) a number of different entities (such as individuals, households, firms,
regions or countries) at a particular point in time
- usually a random sample (but not always)
- not able to be arranged in any “natural” order (we can sort or rank the data into any order
we choose)
- often (but not only) usefully presented with histograms
Time series data are:
- collected over time on one particular ‘entity’
- data with observations which are likely to depend on what has happened in the past

- data with a natural ordering according to time
- often (but not only) presented as line charts
Lecture Three:
Measures of Centre:
Mean/Average: population: µ = (1/N) Σ xi; sample: x̄ = (1/n) Σ xi
- easy to calculate
- sensitive to extreme observations
Median: middle number, or average of two middle numbers
- not sensitive to extreme observations
Mode: most frequently occurring number
- only used for finding most common outcome
x̄ − µ = sampling error
If a distribution is uni-modal then we can show that it is:
- Symmetrical if mean = median = mode
- Right-skewed if mean > median > mode
- Left-skewed if mode > median > mean
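A tiny numerical illustration of these sensitivities, using a made-up sample with one extreme observation:

```python
import statistics

data = [2, 3, 3, 4, 5, 6, 40]  # hypothetical sample with one outlier (40)

print(statistics.mean(data))    # 9  -> pulled upward by the outlier
print(statistics.median(data))  # 4  -> unaffected by the outlier
print(statistics.mode(data))    # 3  -> most frequent value
```

Note that mean (9) > median (4) > mode (3), matching the right-skew pattern above: the single large observation drags the mean to the right.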
Measures of Variation:
Population variance measures the average of the squared deviations between each observation and the population mean: σ² = (1/N) Σ (xi − µ)²
Population standard deviation is the square root of the population variance: σ = √σ²

Sample variance measures the average of the squared deviations between each observation and the sample mean, using an n − 1 divisor: s² = (1/(n − 1)) Σ (xi − x̄)²
Sample standard deviation is the square root of the sample variance: s = √s²
Coefficient of variation measures the variation in a sample (given by its standard deviation) relative to that sample's mean; it is expressed as a percentage to provide a unit-free measurement, letting us compare different samples: CV = (s / x̄) × 100%
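A quick check of the sample formulas above on a made-up sample (the numbers are arbitrary):

```python
import statistics

sample = [4.0, 6.0, 8.0, 10.0, 12.0]  # hypothetical sample
n = len(sample)
xbar = statistics.mean(sample)  # 8.0

# Sample variance with the n - 1 divisor
s2 = sum((x - xbar) ** 2 for x in sample) / (n - 1)  # 10.0
s = s2 ** 0.5  # sample standard deviation

cv = s / xbar * 100  # coefficient of variation, in percent

# The hand-rolled formula agrees with the library's sample variance
print(s2, statistics.variance(sample))
```

`statistics.variance` uses the same n − 1 divisor, so both values match; `statistics.pvariance` would give the population version with divisor N.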
Lecture Four:
Measures of Association:
Covariance measures the co-variation between two sets of observations.
With a population of size N having observations (x1, y1), (x2, y2), …, (xN, yN), and with µx and µy being the respective means of the xi and yi terms, the covariance is calculated as
σxy = (1/N) Σ (xi − µx)(yi − µy)

If we have a sample of size n, with sample means x̄ and ȳ, the sample covariance is calculated as
sxy = (1/(n − 1)) Σ (xi − x̄)(yi − ȳ)
Problems with covariance: it is difficult to interpret the strength of a relationship because covariance
is sensitive to units.
Correlation gives us a measure of association which is not affected by units.
Sample correlation coefficient: r = sxy / (sx·sy), where sx and sy are the sample standard deviations
Population correlation coefficient: ρ = σxy / (σx·σy), where σx and σy are the population standard deviations
- r=1, perfect positive linear relationship
- r=-1, perfect negative linear relationship
- r=0, no linear relationship
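The covariance and correlation formulas above, checked on a made-up perfectly linear pair of samples (y = 2x, so r should come out at 1):

```python
import statistics

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 6.0, 8.0, 10.0]  # y = 2x: perfect positive linear relationship

n = len(x)
xbar, ybar = statistics.mean(x), statistics.mean(y)

# Sample covariance: (1/(n-1)) * sum of (xi - xbar)(yi - ybar)
s_xy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / (n - 1)

# Correlation divides out the units via the two standard deviations
r = s_xy / (statistics.stdev(x) * statistics.stdev(y))
print(s_xy, r)  # s_xy = 5.0, r ≈ 1.0
```

Doubling the units of y would double sxy but leave r unchanged, which is exactly the unit-sensitivity problem correlation solves.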
Lecture Five:
A random experiment is a procedure that generates outcomes that are not known with certainty
until observed.
A random variable (RV) is a variable with a value that is determined by the outcome of an
experiment.
A discrete random variable has a countable number (K) of possible outcomes, each with a specific probability attached to it.
Univariate data has one random variable.
Bivariate data has two random variables.
If X is a random variable with K possible outcomes, then an individual value of X is written as xi, i = 1, 2, …, K
The probability of observing X = xi is written as P(X = xi) or p(xi), where
- 0 ≤ p(xi) ≤ 1 for each i
- Σ p(xi) = 1
- That is, all probabilities must lie between 0 and 1, and together they sum to 1
Expected value/mean of a random variable: the value of X one would expect to get on average over a large/infinite number of repeated trials: µx = E(X) = Σ xi p(xi)
Variance of a random variable: the probability-weighted average of the squared deviations between each possible outcome and the expected value: σ² = V(X) = Σ (xi − µx)² p(xi), or equivalently V(X) = E(X²) − µx²
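A worked example of E(X) and V(X) for a hypothetical three-outcome discrete distribution (outcomes and probabilities are assumptions for illustration):

```python
outcomes = [0, 1, 2]
probs = [0.2, 0.5, 0.3]
assert abs(sum(probs) - 1) < 1e-12  # probabilities must sum to 1

# E(X): probability-weighted average of the outcomes
mu = sum(x * p for x, p in zip(outcomes, probs))  # 0.2*0 + 0.5*1 + 0.3*2 = 1.1

# V(X) two ways: definition, and the shortcut E(X^2) - mu^2
var = sum((x - mu) ** 2 * p for x, p in zip(outcomes, probs))
var_alt = sum(x ** 2 * p for x, p in zip(outcomes, probs)) - mu ** 2

print(mu, var, var_alt)  # both variance formulas give 0.49
```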
Lecture Six:
Rules of Expected Values and Variances:
- E(a) = a; V(a) = 0
- E(aX) = aE(X); V(aX) = a²V(X)
- E(a + X) = a + E(X); V(a + X) = V(X)
- E(a + bX) = a + bE(X); V(a + bX) = b²V(X)
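These rules can be verified numerically for any discrete distribution; the distribution and the constants a, b below are made-up examples:

```python
outcomes = [0, 1, 2]
probs = [0.2, 0.5, 0.3]  # hypothetical distribution

def E(g):
    """Expected value of g(X) under the distribution above."""
    return sum(g(x) * p for x, p in zip(outcomes, probs))

def V(g):
    """Variance of g(X) via the shortcut E(g(X)^2) - E(g(X))^2."""
    return E(lambda x: g(x) ** 2) - E(g) ** 2

a, b = 3.0, 2.0
mu, var = E(lambda x: x), V(lambda x: x)

# E(a + bX) = a + b*E(X)  and  V(a + bX) = b^2 * V(X)
assert abs(E(lambda x: a + b * x) - (a + b * mu)) < 1e-9
assert abs(V(lambda x: a + b * x) - b ** 2 * var) < 1e-9
print("rules verified")
```

The intuition: adding a constant shifts every outcome equally, so it moves the mean but not the spread; scaling by b stretches deviations by b, so the variance scales by b².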