Isomorphisms, Linear Operators, Eigenvalues and Eigenvectors


Lecture 7 notes by Y. Burda

1 Isomorphism of vector spaces
We feel that the vector space of quadratic polynomials is "the same" as the vector space of triples of their coefficients: instead of a polynomial $a+bx+cx^2$ we can record the column of its coefficients $\begin{pmatrix}a\\b\\c\end{pmatrix}$; if we want to add two polynomials, we can instead add the triples of their coefficients, and if we want to multiply a polynomial by a number, we can instead multiply its coefficients by this number. As far as the vector space structure goes, this is enough: in an abstract vector space all we can do is add vectors and multiply them by scalars. The technical term for vector spaces which are "the same" in this sense is "isomorphic vector spaces".
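For instance, with two concrete quadratic polynomials (chosen here just for illustration), addition and scaling match up coordinate-wise:
$$(1+2x+3x^2)+(4+5x+6x^2)=5+7x+9x^2 \quad\text{corresponds to}\quad \begin{pmatrix}1\\2\\3\end{pmatrix}+\begin{pmatrix}4\\5\\6\end{pmatrix}=\begin{pmatrix}5\\7\\9\end{pmatrix},$$
and $3\cdot(1+2x+3x^2)=3+6x+9x^2$ corresponds to $3\cdot\begin{pmatrix}1\\2\\3\end{pmatrix}=\begin{pmatrix}3\\6\\9\end{pmatrix}$.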
Let's try to formulate precisely when two vector spaces $V$ and $W$ are isomorphic: we want to associate to any vector of $V$ a vector of $W$ in such a way that the vector associated to $v_1+v_2$ will be the sum of the vectors associated to $v_1$ and $v_2$. We also want the vector associated to $\lambda\cdot v$ to be the product of $\lambda$ and the vector associated to $v$. Moreover, we want exactly one vector of $V$ to be associated to any vector in $W$.
Thus we arrive at the following definition:
Definition. Vector spaces $V$ and $W$ over the same field $K$ are called isomorphic if there exists an invertible linear transformation $T\colon V\to W$. Such a transformation is called an isomorphism of $V$ and $W$.
Example: $P_n(K)$ is isomorphic to $K^{n+1}$, with the isomorphism
$$T(a_0+a_1x+\dots+a_nx^n)=\begin{pmatrix}a_0\\ \vdots\\ a_n\end{pmatrix}.$$
We can generalize this example and prove the following theorem:

Theorem. If $\dim V=n$, then $V$ is isomorphic to $K^n$.

Proof. Choose a basis $A=(v_1,\dots,v_n)$ of $V$ and let $T\colon V\to K^n$ be the transformation that sends each vector to its coordinates relative to $A$: $T(v)=[v]_A$. This transformation is linear (exercise) and is invertible: its inverse is
$$T^{-1}\begin{pmatrix}\lambda_1\\ \vdots\\ \lambda_n\end{pmatrix}=\lambda_1v_1+\dots+\lambda_nv_n.$$
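Indeed (a quick check, spelling out why this formula inverts $T$): applying $T$ after the map above gives
$$T(\lambda_1v_1+\dots+\lambda_nv_n)=[\lambda_1v_1+\dots+\lambda_nv_n]_A=\begin{pmatrix}\lambda_1\\ \vdots\\ \lambda_n\end{pmatrix},$$
and conversely $T^{-1}(T(v))=T^{-1}([v]_A)=v$, since recombining the coordinates of $v$ relative to $A$ with the basis vectors returns $v$.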

Example: construct an isomorphism of $V=\left\{\begin{pmatrix}x\\y\\z\end{pmatrix}\in\mathbb{C}^3 \;\middle|\; x+y+z=0\right\}$ with $\mathbb{C}^2$.
Solution: let $A=\left(\begin{pmatrix}1\\0\\-1\end{pmatrix},\begin{pmatrix}0\\1\\-1\end{pmatrix}\right)$ be a basis of $V$. Then the transformation $T$ described in the proof of the theorem above sends the vector $\begin{pmatrix}x\\y\\z\end{pmatrix}=x\begin{pmatrix}1\\0\\-1\end{pmatrix}+y\begin{pmatrix}0\\1\\-1\end{pmatrix}$ to its coordinates $x,y$ relative to the basis $A$:
$$T\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}x\\y\end{pmatrix}.$$
It might seem that $T$ "forgets" the $z$-coordinate and thus isn't invertible, but it isn't so: knowing the $x$- and $y$-coordinates of a vector in $V$, its $z$-coordinate can be reconstructed: $z=-x-y$. Thus the inverse of $T$ is
$$T^{-1}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}x\\y\\-x-y\end{pmatrix}.$$
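As a check, composing the two maps gives back the identity on $V$:
$$T^{-1}\left(T\begin{pmatrix}x\\y\\z\end{pmatrix}\right)=T^{-1}\begin{pmatrix}x\\y\end{pmatrix}=\begin{pmatrix}x\\y\\-x-y\end{pmatrix}=\begin{pmatrix}x\\y\\z\end{pmatrix},$$
since $z=-x-y$ for every vector of $V$.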
We can generalize the statement of the previous theorem slightly:

Theorem. Vector spaces $V$ and $W$ over the same field $K$ are isomorphic if and only if $\dim V=\dim W$.

Proof. We can use the first theorem we proved: if $T\colon V\to K^n$ is an isomorphism (where $n=\dim V=\dim W$) and $S\colon W\to K^n$ is an isomorphism, then $S^{-1}T\colon V\to W$ is an isomorphism from $V$ to $W$. (For the converse direction, note that an isomorphism sends a basis of $V$ to a basis of $W$, so isomorphic spaces have equal dimensions.)
We prefer however to give an explicit construction of this isomorphism: if $(v_1,\dots,v_n)$ is a basis of $V$ and $(w_1,\dots,w_n)$ is a basis of $W$, then the map $R\colon V\to W$, $R(\lambda_1v_1+\dots+\lambda_nv_n)=\lambda_1w_1+\dots+\lambda_nw_n$ is an isomorphism.
Example: find an isomorphism of $V=\left\{\begin{pmatrix}t\\s\\r\end{pmatrix}\in\mathbb{R}^3 \;\middle|\; t+2s+3r=0\right\}$ with $W=\{a+bx+bx^2 \mid a,b\in\mathbb{R}\}$.
Solution: let $A=\left(\begin{pmatrix}-2\\1\\0\end{pmatrix},\begin{pmatrix}-3\\0\\1\end{pmatrix}\right)$ be a basis for $V$ and let $B=(1,\;x+x^2)$ be a basis of $W$. Let $T\colon V\to W$ be the transformation that sends the vector $\begin{pmatrix}t\\s\\r\end{pmatrix}\in V$ with coordinates $s,r$ relative to the basis $A$ to the vector $s\cdot 1+r\cdot(x+x^2)\in W$ having the same coordinates, but relative to $B$:
$$T\begin{pmatrix}t\\s\\r\end{pmatrix}=s+rx+rx^2.$$
This transformation is an isomorphism, with the inverse transformation sending $s+rx+rx^2$ back to $s\cdot\begin{pmatrix}-2\\1\\0\end{pmatrix}+r\cdot\begin{pmatrix}-3\\0\\1\end{pmatrix}=\begin{pmatrix}-2s-3r\\s\\r\end{pmatrix}\in V$.
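As a sanity check: the vector $\begin{pmatrix}-2s-3r\\s\\r\end{pmatrix}$ produced by the inverse indeed lies in $V$, since $(-2s-3r)+2s+3r=0$, and applying $T$ to it reads off the coordinates $s,r$ again, giving back $s+rx+rx^2$.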
2 Linear operators

We call a linear transformation from a vector space to itself a linear operator: $T\colon V\to V$.

If $T,S\colon V\to V$ are linear operators, then we can form linear operators $T+S$, $\lambda T$, $TS$, $ST$, $T^8$, $2I+(TS)^5$ (where $I$ stands for the identity operator: $I(v)=v$ for any $v\in V$).
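For instance, taking for illustration the differentiation operator $D\colon P_2(\mathbb{R})\to P_2(\mathbb{R})$, $D(a+bx+cx^2)=b+2cx$, we get
$$D^2(a+bx+cx^2)=D(b+2cx)=2c,\qquad D^3=0,\qquad (2I+D)(a+bx+cx^2)=(2a+b)+(2b+2c)x+2cx^2.$$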
Given a basis $A$ of $V$ we say that the $n\times n$ square matrix $[T]_{A,A}$ is the matrix of $T$ relative to the basis $A$ (note that we choose the same basis in the source and in the target vector space $V$).
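Continuing the illustration above: relative to the basis $A=(1,x,x^2)$ of $P_2(\mathbb{R})$, the differentiation operator has matrix
$$[D]_{A,A}=\begin{pmatrix}0&1&0\\0&0&2\\0&0&0\end{pmatrix},$$
since $D(1)=0$, $D(x)=1$, $D(x^2)=2x$, and the columns record the coordinates of these images relative to $A$.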
If we choose a different basis $B$ for $V$, then
$$[T]_{B,B}=[I]_{B,A}[T]_{A,A}[I]_{A,B},$$
i.e. $[T]_{B,B}=P^{-1}[T]_{A,A}P$, where $P=[I]_{A,B}$ is the change of basis matrix (so that $P^{-1}=[I]_{B,A}$).
We call two matrices with this property similar or conjugate:

Definition. Two $n\times n$ matrices $A$ and $B$ are called similar if there exists an invertible matrix $P$ such that $B=P^{-1}AP$.
We showed above that two matrices are similar if and only if they are the matrices of the same operator relative to two different bases (for the converse direction, note that any invertible matrix $P$ serves as the change of basis matrix $[I]_{A,B}$ for a suitable basis $B$).
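As a worked illustration (the operator and bases here are chosen just as an example): let $T$ be the operator on $\mathbb{R}^2$ whose matrix relative to the standard basis $A=(e_1,e_2)$ is $[T]_{A,A}=\begin{pmatrix}1&1\\0&2\end{pmatrix}$, and let $B=(e_1,\;e_1+e_2)$. Then $P=[I]_{A,B}=\begin{pmatrix}1&1\\0&1\end{pmatrix}$, $P^{-1}=\begin{pmatrix}1&-1\\0&1\end{pmatrix}$, and
$$[T]_{B,B}=P^{-1}\begin{pmatrix}1&1\\0&2\end{pmatrix}P=\begin{pmatrix}1&0\\0&2\end{pmatrix},$$
so $\begin{pmatrix}1&1\\0&2\end{pmatrix}$ and $\begin{pmatrix}1&0\\0&2\end{pmatrix}$ are similar: they represent the same operator $T$ in two different bases.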
One thing we can do with operators, which we can't do with linear transformations in general, is compute their determinants:

Definition.
$$\det T=\det[T]_{A,A}$$
where $A$ is any basis of $V$.
It seems that the number we get depends on the choice of the basis $A$, but in fact it doesn't: if $B$ is a different basis, then $\det[T]_{B,B}=\det([I]_{B,A}[T]_{A,A}[I]_{A,B})$. We can use the fact that the determinant is multiplicative to write this as $\det[I]_{B,A}\det[T]_{A,A}\det[I]_{A,B}=\det[T]_{A,A}$, as $\det[I]_{A,B}=1/\det[I]_{B,A}$. Thus we have proved that $\det[T]_{B,B}=\det[T]_{A,A}$.
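For the illustrative operator above this is easy to see directly: $\det\begin{pmatrix}1&1\\0&2\end{pmatrix}=1\cdot 2-1\cdot 0=2$ and $\det\begin{pmatrix}1&0\\0&2\end{pmatrix}=2$, so $\det T=2$ no matter which of the two bases is used.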
3 Eigenvalues and eigenvectors
We start with a metaphor that should explain why eigenvalues and eigenvectors are useful. Suppose that we are observing a magician that keeps performing the following trick: in a pile of wooden and steel balls he doubles the number of wooden balls. It is extremely easy to predict how many wooden and steel balls there are going to be in the pile after 5 such performances, if we know how many there were initially: the number of steel balls stays the same, while the number of wooden balls gets multiplied by $2^5=32$.