Department: Statistical Sciences
Course: STA437H1
Professor: Radford Neal
Semester: Fall

Notes for STA 437/1005 — Methods for Multivariate Data
Radford M. Neal, 26 November 2010

Random Vectors

Notation:
Let $X$ be a random vector with $p$ elements, so that $X = [X_1,\ldots,X_p]'$, where $'$ denotes transpose. (By convention, our vectors are column vectors unless otherwise indicated.) We denote a particular realized value of $X$ by $x$.

Expectation:
The expectation (expected value, mean) of a random vector $X$ is $E(X) = \int x f(x)\,dx$, where $f(x)$ is the joint probability density function for the distribution of $X$. We often denote $E(X)$ by $\mu$, with $\mu_j = E(X_j)$ being the expectation of the $j$'th element of $X$.

Variance:
The variance of the random variable $X_j$ is $\mathrm{Var}(X_j) = E[(X_j - E(X_j))^2]$, which we sometimes write as $\sigma_j^2$.
The standard deviation of $X_j$ is $\sqrt{\mathrm{Var}(X_j)} = \sigma_j$.

Covariance and correlation:
The covariance of $X_j$ and $X_k$ is $\mathrm{Cov}(X_j,X_k) = E[(X_j - E(X_j))(X_k - E(X_k))]$, which we sometimes write as $\sigma_{jk}$. Note that $\mathrm{Cov}(X_j,X_j)$ is the variance of $X_j$, so $\sigma_{jj} = \sigma_j^2$.
The correlation of $X_j$ and $X_k$ is $\mathrm{Cov}(X_j,X_k)/(\sigma_j \sigma_k)$, which we sometimes write as $\rho_{jk}$. Note that correlations are always between $-1$ and $+1$, and $\rho_{jj}$ is always one.

Covariance and correlation matrices:
The covariances for all pairs of elements of $X = [X_1,\ldots,X_p]'$ can be put in a matrix called the covariance matrix:
$$\Sigma \;=\; \begin{bmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1p} \\ \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2p} \\ \vdots & \vdots & & \vdots \\ \sigma_{p1} & \sigma_{p2} & \cdots & \sigma_{pp} \end{bmatrix}$$
Note that the covariance matrix is symmetrical, with the variances of the elements on the diagonal.
The covariance matrix can also be written as $\Sigma = E[(X - E(X))(X - E(X))']$.
Similarly, the correlations can be put into a symmetrical correlation matrix, which will have ones on the diagonal.

Multivariate Sample Statistics

Notation:
Suppose we have $n$ observations, each with values for $p$ variables. We denote the value of variable $j$ in observation $i$ by $x_{ij}$, and the vector of all values for observation $i$ by $x_i$.
We often view the observed $x_i$ as a random sample of realizations of a random vector $X$ with some (unknown) distribution.
There is potential ambiguity between the notation $x_i$ for observation $i$ and the notation $x_j$ for a realization of the random variable $X_j$. (The textbook uses bold face for $x_i$.) I will (try to) reserve $i$ for indexing observations, and use $j$ and $k$ for indexing variables, but the textbook sometimes uses $i$ to index a variable.

Sample means:
The sample mean of variable $j$ is $\bar{x}_j = \frac{1}{n}\sum_{i=1}^{n} x_{ij}$.
The sample mean vector is $\bar{x} = [\bar{x}_1,\ldots,\bar{x}_p]'$.
If the observations all have the same distribution, the sample mean vector, $\bar{x}$, is an unbiased estimate of the mean vector, $\mu$, of the distribution from which these observations came.

Sample variances:
The sample variance of variable $j$ is $s_j^2 = \frac{1}{n-1}\sum_{i=1}^{n} (x_{ij} - \bar{x}_j)^2$.
If the observations all have the same distribution, the sample variance, $s_j^2$, is an estimate of the variance, $\sigma_j^2$, of the distribution for $X_j$, and will be an unbiased estimate if the observations are independent.

Sample covariance and correlation:
The sample covariance of variable $j$ with variable $k$ is $\frac{1}{n-1}\sum_{i=1}^{n} (x_{ij} - \bar{x}_j)(x_{ik} - \bar{x}_k)$. The sample covariance is denoted by $s_{jk}$. Note that $s_{jj}$ equals $s_j^2$, the sample variance of variable $j$.
The sample correlation of variable $j$ with variable $k$ is $s_{jk}/(s_j s_k)$, often denoted by $r_{jk}$.

Sample covariance and correlation matrices:
The sample covariances may be arranged as the sample covariance matrix:
$$S \;=\; \begin{bmatrix} s_{11} & s_{12} & \cdots & s_{1p} \\ s_{21} & s_{22} & \cdots & s_{2p} \\ \vdots & \vdots & & \vdots \\ s_{p1} & s_{p2} & \cdots & s_{pp} \end{bmatrix}$$
The sample covariance matrix can also be computed as $S = \frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})'$.
Similarly, the sample correlations may be arranged as the sample correlation matrix, sometimes denoted $R$ (though the textbook also uses $R$ for the population correlation matrix).
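As a concrete illustration of these sample formulas, here is a small NumPy sketch (not from the original notes; the data matrix is made up) that computes $\bar{x}$, $S$, and the sample correlation matrix directly from the definitions above and checks them against NumPy's built-in estimators.

```python
import numpy as np

# Made-up data matrix: n = 5 observations (rows) on p = 3 variables (columns).
x = np.array([[2.0, 1.0, 4.0],
              [3.0, 2.0, 6.0],
              [1.0, 0.5, 3.0],
              [4.0, 2.5, 8.0],
              [2.5, 1.5, 5.0]])
n, p = x.shape

# Sample mean vector: x_bar_j = (1/n) * sum_i x_ij.
x_bar = x.mean(axis=0)

# Sample covariance matrix: S = (1/(n-1)) * sum_i (x_i - x_bar)(x_i - x_bar)'.
centered = x - x_bar
S = centered.T @ centered / (n - 1)

# Sample correlation matrix: r_jk = s_jk / (s_j * s_k).
s = np.sqrt(np.diag(S))      # sample standard deviations s_j
R = S / np.outer(s, s)

# Check against NumPy's built-ins (rowvar=False: variables are in columns).
assert np.allclose(S, np.cov(x, rowvar=False))
assert np.allclose(R, np.corrcoef(x, rowvar=False))

print("sample mean vector:", x_bar)
print("sample covariance matrix:\n", S)
print("sample correlation matrix:\n", R)
```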
Linear Combinations of Random Variables

Define the random variable $Y = a_1 X_1 + a_2 X_2 + \cdots + a_p X_p$, which can be written as $Y = a'X$, where $a = [a_1, a_2, \ldots, a_p]'$.
Then one can show that $E(Y) = a'\mu$ and $\mathrm{Var}(Y) = a'\Sigma a$, where $\mu = E(X)$ and $\Sigma$ is the covariance matrix for $X$.
For a random vector of dimension $q$ defined as $Y = AX$, with $A$ being a $q \times p$ matrix, one can show that $E(Y) = A\mu$ and $\mathrm{Var}(Y) = A\Sigma A'$, where $\mathrm{Var}(Y)$ is the covariance matrix of $Y$.
Similarly, if $x_i$ is the $i$'th observed vector, and we de…
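These identities are easy to check numerically. The sketch below (again an illustration, with made-up $\mu$, $\Sigma$, and $A$) draws many realizations of $X$ from a multivariate normal, forms $Y = AX$, and compares the sample mean vector and sample covariance matrix of $Y$ with the theoretical $A\mu$ and $A\Sigma A'$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up population parameters for a p = 3 dimensional X.
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.4],
                  [0.1, 0.4, 1.5]])

# A q x p matrix defining Y = AX (q = 2 here); its first row is a single
# linear combination a'X with a = [1, 2, -1].
A = np.array([[1.0, 2.0, -1.0],
              [0.5, 0.0,  1.0]])

# Simulate many realizations of X (one per row), then form the Y's.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ A.T

# Sample moments of Y should be close to A mu and A Sigma A'.
print("sample E(Y)   ~", Y.mean(axis=0))
print("theory A mu    =", A @ mu)
print("sample Var(Y) ~\n", np.cov(Y, rowvar=False))
print("theory A Sigma A' =\n", A @ Sigma @ A.T)
```

The same comparison works for a single linear combination $a'X$: take $a$ to be one row of $A$, and the corresponding diagonal entry of $A\Sigma A'$ is $a'\Sigma a$.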