Notes for STA 437/1005 — Methods for Multivariate Data
Radford M. Neal, 26 November 2010
Let X be a random vector with p elements, so that X = [X_1, ..., X_p]′, where ′ denotes
transpose. (By convention, our vectors are column vectors unless otherwise indicated.)
We denote a particular realized value of X by x.
The expectation (expected value, mean) of a random vector X is E(X) = ∫ x f(x) dx,
where f(x) is the joint probability density function for the distribution of X.
We often denote E(X) by µ, with µ_j = E(X_j) being the expectation of the j'th element of X.
The variance of the random variable X_j is Var(X_j) = E[(X_j − E(X_j))²], which we some-
times write as σ_j².
The standard deviation of X_j is √Var(X_j) = σ_j.
Covariance and correlation:
The covariance of X_j and X_k is Cov(X_j, X_k) = E[(X_j − E(X_j))(X_k − E(X_k))], which we
sometimes write as σ_jk. Note that Cov(X_j, X_j) is the variance of X_j, so σ_jj = σ_j².
The correlation of X_j and X_k is Cov(X_j, X_k)/(σ_j σ_k), which we sometimes write as ρ_jk.
Note that correlations are always between −1 and +1, and ρ_jj is always one.
Covariance and correlation matrices:
The covariances for all pairs of elements of X = [X_1, ..., X_p]′ can be put in a matrix called
the covariance matrix:
        [ σ_11  σ_12  ···  σ_1p ]
        [ σ_21  σ_22  ···  σ_2p ]
    Σ = [   :     :          :  ]
        [ σ_p1  σ_p2  ···  σ_pp ]
Note that the covariance matrix is symmetrical, with the variances of the elements on the
diagonal.
The covariance matrix can also be written as Σ = E[(X − E(X))(X − E(X))′].
Similarly, the correlations can be put into a symmetrical correlation matrix, which will
have ones on the diagonal.
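As a concrete sketch of these definitions (not from the notes; the distribution here is invented for illustration), the following NumPy code computes µ, Σ = E[(X − E(X))(X − E(X))′], and the correlation matrix for a two-element random vector X with a discrete distribution:

```python
import numpy as np

# Invented discrete joint distribution for X = [X_1, X_2]':
# each row of vals is one possible value of X, with the matching probability.
vals = np.array([[0.0, 1.0],
                 [1.0, 0.0],
                 [1.0, 1.0],
                 [2.0, 3.0]])
probs = np.array([0.1, 0.2, 0.3, 0.4])

mu = probs @ vals                                  # E(X) as a probability-weighted sum
centered = vals - mu
Sigma = (probs[:, None] * centered).T @ centered   # E[(X - mu)(X - mu)']

sd = np.sqrt(np.diag(Sigma))                       # standard deviations sigma_j
rho = Sigma / np.outer(sd, sd)                     # correlation matrix

print(mu, Sigma, rho)
```

The diagonal of Σ holds the variances, Σ is symmetrical, and the diagonal of the correlation matrix is all ones, as stated above.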
Multivariate Sample Statistics
Suppose we have n observations, each with values for p variables. We denote the value of
variable j in observation i by x_ij, and the vector of all values for observation i by x_i.
We often view the observed x_i as a random sample of realizations of a random vector X
with some (unknown) distribution.
There is potential ambiguity between the notation x_i for observation i, and the notation x_j
for a realization of the random variable X_j. (The textbook uses bold face for x_i.)
I will (try to) reserve i for indexing observations, and use j and k for indexing variables,
but the textbook sometimes uses i to index a variable.
The sample mean of variable j is x̄_j = (1/n) Σ_{i=1}^n x_ij.
The sample mean vector is x̄ = [x̄_1, ..., x̄_p]′.
If the observations all have the same distribution, the sample mean vector, x ¯, is an unbiased
estimate of the mean vector, µ, of the distribution from which these observations came.
The sample variance of variable j is s_j² = (1/(n−1)) Σ_{i=1}^n (x_ij − x̄_j)².
If the observations all have the same distribution, the sample variance, s_j², is an estimate
of the variance, σ_j², of the distribution for X_j, and will be an unbiased estimate if the
observations are independent.
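A minimal NumPy sketch of these two formulas (the data matrix is invented for illustration); note that ddof=1 gives the n − 1 divisor used above:

```python
import numpy as np

# Invented data: n = 4 observations (rows) on p = 2 variables (columns).
x = np.array([[1.0, 2.0],
              [3.0, 1.0],
              [2.0, 4.0],
              [4.0, 3.0]])

xbar = x.mean(axis=0)        # sample mean vector [xbar_1, ..., xbar_p]'
s2 = x.var(axis=0, ddof=1)   # sample variances s_j^2, using the n-1 divisor

print(xbar, s2)
```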
Sample covariance and correlation:
The sample covariance of variable j with variable k is (1/(n−1)) Σ_{i=1}^n (x_ij − x̄_j)(x_ik − x̄_k).
The sample covariance is denoted by s_jk. Note that s_jj equals s_j², the sample variance of
variable j.
The sample correlation of variable j with variable k is s_jk/(s_j s_k), often denoted by r_jk.
Sample covariance and correlation matrices:
The sample covariances may be arranged as the sample covariance matrix:
        [ s_11  s_12  ···  s_1p ]
        [ s_21  s_22  ···  s_2p ]
    S = [   :     :          :  ]
        [ s_p1  s_p2  ···  s_pp ]
The sample covariance matrix can also be computed as S = (1/(n−1)) Σ_{i=1}^n (x_i − x̄)(x_i − x̄)′.
Similarly, the sample correlations may be arranged as the sample correlation matrix, some-
times denoted R (though the textbook also uses R for the population correlation matrix).
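A sketch computing S directly from the centered observations and comparing with NumPy's built-in estimators, which also use the n − 1 divisor by default (the data matrix is invented):

```python
import numpy as np

# Invented data: n = 4 observations (rows) on p = 2 variables (columns).
x = np.array([[1.0, 2.0],
              [3.0, 1.0],
              [2.0, 4.0],
              [4.0, 3.0]])

n = x.shape[0]
xbar = x.mean(axis=0)

# S = (1/(n-1)) * sum over i of (x_i - xbar)(x_i - xbar)'
S = (x - xbar).T @ (x - xbar) / (n - 1)

S_np = np.cov(x, rowvar=False)     # same computation, built in
R = np.corrcoef(x, rowvar=False)   # sample correlation matrix

print(S, R)
```

Both np.cov and np.corrcoef treat rows as variables by default, so rowvar=False is needed when observations are in rows, as here.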
Linear Combinations of Random Variables
Define the random variable Y = a_1 X_1 + a_2 X_2 + ··· + a_p X_p, which can be written as Y = a′X,
where a = [a_1, a_2, ..., a_p]′.
Then one can show that E(Y) = a′µ and Var(Y) = a′Σa, where µ = E(X) and Σ is the
covariance matrix for X.
For a random vector of dimension q defined as Y = AX, with A being a q × p matrix, one can
show that E(Y) = Aµ and Var(Y) = AΣA′, where Var(Y) is the covariance matrix of Y.
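A small sketch of these identities, with an invented µ, Σ, and A (nothing here is from the notes beyond the formulas E(Y) = Aµ and Var(Y) = AΣA′):

```python
import numpy as np

# Invented mean vector and covariance matrix for a p = 3 dimensional X.
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 0.5]])

# q = 2 linear combinations of the elements of X, i.e. Y = A X.
A = np.array([[1.0,  1.0, 0.0],
              [0.5, -1.0, 2.0]])

EY = A @ mu              # E(Y) = A mu
VarY = A @ Sigma @ A.T   # Var(Y) = A Sigma A'

# The single-combination case Y = a'X corresponds to one row of A:
a = A[0]
print(EY, VarY, a @ Sigma @ a)
```

The scalar formula Var(a′X) = a′Σa appears as the corresponding diagonal entry of AΣA′.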
Similarly, if x_i is the i'th observed vector, and we de