Course Summary: Math 211
Table of Contents
I. Functions of several variables.
II. R^n.
IV. Taylor's Theorem.
V. Differential Geometry.
1. Best affine approximations.
3. Lagrange multipliers.
4. Conservation of energy.
I. Functions of several variables.
Definition 1.1. Let S and T be sets. The Cartesian product of S and T is the set of ordered pairs
S × T := {(s, t) | s ∈ S, t ∈ T}.
Definition 1.2. Let S and T be sets. A function from S to T is a subset W of the Cartesian
product S × T such that: (i) for each s ∈ S there is an element in W whose first component
is s, i.e., there is an element (s, t) ∈ W for some t ∈ T; and (ii) if (s, t) and (s, t') are in W,
then t = t'. Notation: if (s, t) ∈ W, we write f(s) = t. The subset W, which is by definition
the function f, is also called the graph of f.
Definition 1.3. Let f: S → T be a function between sets S and T.
1. f is one-to-one or injective if f(x) = f(y) only if x = y.
2. The image or range of f is {f(s) ∈ T | s ∈ S}. The image will be denoted by im(f).
3. f is onto if im(f) = T.
4. The domain of f is S and the codomain of f is T.
5. The inverse image of t ∈ T is f^{-1}(t) := {s ∈ S | f(s) = t}.
Definition 1.4. Let f: S → T and g: T → U. The composition of f and g is the function
g ∘ f: S → U given by (g ∘ f)(s) := g(f(s)).
Definition 1.5. R^n is the Cartesian product of R with itself n times. We think of R^n as
the set of ordered n-tuples of real numbers:
R^n := {(a_1, ..., a_n) | a_i ∈ R, 1 ≤ i ≤ n}.
The elements of R^n are called points or vectors.
Definition 1.6. A function of several variables is a function of the form f: S → R^m where
S ⊆ R^n. Writing f(x) = (f_1(x), ..., f_m(x)), the function f_i: S → R, for each i = 1, ..., m, is
called the i-th component function of f.
Definition 1.7. Let f be a function of several variables, f: S → R^m, with S ⊆ R^n. If n = 1,
then f is a parametrized curve; if n = 2, then f is a parametrized surface. In general, we say
f is a parametrized n-surface.
Definition 1.8. A vector field is a function of the form f: S → R^n where S ⊆ R^n.
Definition 1.9. If f: S → R with S ⊆ R^n, a level set of f is the inverse image of a point in
R. A drawing showing several level sets is called a contour diagram for f.
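For instance, the level sets of f(x, y) = x^2 + y^2 are concentric circles, and drawing several of them gives a contour diagram. A minimal sketch of such a drawing, assuming numpy and matplotlib are available (the function, grid, and levels are illustrative choices, not part of the course):

    # Sketch: contour diagram (several level sets) of f(x, y) = x^2 + y^2.
    # Assumes numpy and matplotlib; the function and levels are only an illustration.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-2, 2, 200)
    y = np.linspace(-2, 2, 200)
    X, Y = np.meshgrid(x, y)            # grid of points in R^2
    Z = X**2 + Y**2                     # f evaluated on the grid

    # Each drawn curve is one level set f^{-1}(c); together they form a contour diagram.
    plt.contour(X, Y, Z, levels=[0.5, 1, 2, 3])
    plt.gca().set_aspect("equal")
    plt.show()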
II. R^n.
Definition 2.1. The i-th coordinate of a = (a_1, ..., a_n) ∈ R^n is a_i. For i = 1, ..., n, define
the i-th standard basis vector for R^n to be the vector e_i whose coordinates are all zero except
the i-th coordinate, which is 1.
Definition 2.2. The additive inverse of a = (a_1, ..., a_n) ∈ R^n is the vector -a :=
(-a_1, ..., -a_n).
Definition 2.3. In R^n, define 0 := (0, ..., 0), the vector whose coordinates are all 0.
Definition 2.4. (Linear structure on R^n.) If a = (a_1, ..., a_n) and b = (b_1, ..., b_n) are points
in R^n and s ∈ R, define
a + b = (a_1, ..., a_n) + (b_1, ..., b_n) := (a_1 + b_1, ..., a_n + b_n)
sa = s(a_1, ..., a_n) := (sa_1, ..., sa_n).
The point a + b is the translation of a by b (or of b by a), and sa is the dilation of a by a
factor of s. Define a - b := a + (-b).
Metric structure.
Definition 2.5. The dot product on R^n is the function R^n × R^n → R given by
(a_1, ..., a_n) · (b_1, ..., b_n) := Σ_{i=1}^{n} a_i b_i.
The dot product is also called the inner product or scalar product. If a, b ∈ R^n, the dot
product is denoted by a · b, as above, or sometimes by (a, b) or ⟨a, b⟩.
Definition 2.6. The norm or length of a vector a = (a_1, ..., a_n) ∈ R^n is
|a| := √(a · a) = √( Σ_{i=1}^{n} a_i^2 ).
The norm can also be denoted by ||a||.
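A quick numerical illustration of the dot product and the norm, as a sketch assuming numpy is available (the vectors are arbitrary examples):

    # Sketch: dot product and norm in R^3, assuming numpy; the vectors are arbitrary.
    import numpy as np

    a = np.array([1.0, 2.0, 2.0])
    b = np.array([3.0, 0.0, 4.0])

    dot = a @ b                    # a . b = 1*3 + 2*0 + 2*4 = 11
    norm_a = np.sqrt(a @ a)        # |a| = sqrt(1 + 4 + 4) = 3
    print(dot, norm_a)
    print(np.linalg.norm(a))       # agrees with sqrt(a . a)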
Definition 2.7. The vector a ∈ R^n is a unit vector if |a| = 1.
Definition 2.8. Let p ∈ R^n and r ∈ R.
1. The open ball of radius r centered at p is the set
B_r(p) := {a ∈ R^n | |a - p| < r}.
2. The closed ball of radius r centered at p is the set
B̄_r(p) := {a ∈ R^n | |a - p| ≤ r}.
3. The sphere of radius r centered at p is the set
S_r(p) := {a ∈ R^n | |a - p| = r}.
Definition 2.9. The distance between a = (a_1, ..., a_n) and b = (b_1, ..., b_n) in R^n is
d(a, b) := |a - b| = √( Σ_{i=1}^{n} (a_i - b_i)^2 ).
Definition 2.10. Points a, b ∈ R^n are perpendicular or orthogonal if a · b = 0.
Definition 2.11. Suppose a, b are nonzero vectors in R^n. The angle between them is defined
to be cos^{-1}( (a · b)/(|a||b|) ).
Definition 2.12. Let a, b ∈ R^n with b ≠ 0. The component of a along b is the scalar
c := (a · b)/|b|. The projection of a along b is the vector cb/|b|, where c is the component of a along b.
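The component and projection can be computed directly; a sketch assuming numpy (the vectors are arbitrary examples):

    # Sketch: component and projection of a along b, assuming numpy.
    import numpy as np

    a = np.array([2.0, 3.0])
    b = np.array([4.0, 0.0])

    component = (a @ b) / np.linalg.norm(b)           # scalar component of a along b: 2.0
    projection = component * b / np.linalg.norm(b)    # equals ((a.b)/(b.b)) * b: [2. 0.]
    print(component, projection)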
Definition 2.13. A nonempty subset W ⊆ R^n is a linear subspace if it is closed under
vector addition and scalar multiplication. This means that: (i) if a, b ∈ W then a + b ∈ W,
and (ii) if a ∈ W and s ∈ R, then sa ∈ W.
Definition 2.14. A vector v ∈ R^n is a linear combination of vectors v_1, ..., v_k ∈ R^n if there
are scalars a_1, ..., a_k ∈ R such that v = Σ_{i=1}^{k} a_i v_i.
Definition 2.15. A subspace W ⊆ R^n is spanned by a subset S ⊆ R^n if every element of
W can be written as a linear combination of elements of S. If W is spanned by S, we write
span(S) = W.
Definition 2.16. The dimension of a linear subspace W ⊆ R^n is the smallest number of
vectors needed to span W.
Definition 2.17. Let W be a subset of R^n and let p ∈ R^n. The set
p + W := {p + w | w ∈ W}
is called the translation of W by p. An affine subspace of R^n is any subset of the form p + W
where W is a linear subspace of R^n. In this case, the dimension of the affine subspace is
defined to be the dimension of W.
Definition 2.18. A k-plane in R^n is an affine subspace of dimension k. A line is a 1-plane,
and a hyperplane is an (n - 1)-plane.
Definition 2.19. A function L: R^n → R^m is a linear function (or transformation or map) if
it preserves vector addition and scalar multiplication. This means that for all a, b ∈ R^n and
for all s ∈ R,
1. L(a + b) = L(a) + L(b);
2. L(sa) = sL(a).
Definition 2.20. (Linear structure on the space of linear functions.) Let L and M be linear
functions with domain R^n and codomain R^m.
1. Define the linear function L + M: R^n → R^m by
(L + M)(v) := L(v) + M(v)
for all v ∈ R^n.
2. If s ∈ R, define the linear function sL: R^n → R^m by
(sL)(v) := L(sv)
for all v ∈ R^n.
Definition 2.21. A function f: R^n → R^m is an affine function (or transformation or map)
if it is the 'translation' of a linear function. This means that there is a linear function
L: R^n → R^m and a point p ∈ R^m such that f(v) = p + L(v) for all v ∈ R^n.
Definition 2.22. Let W be a k-dimensional affine subspace of R^n. A parametric equation
for W is any affine function f: R^k → R^n whose image is W.
Definition 2.23. An m × n matrix is a rectangular block of real numbers with m rows and
n columns. The real number appearing in the i-th row and j-th column is called the i,j-th
entry of the matrix. We write A = (a_{ij}) for the matrix whose i,j-th entry is a_{ij}.
Definition 2.24. (Linear structure on matrices.) Let A = (a_{ij}) and B = (b_{ij}) be m × n
matrices. Define A + B := (a_{ij} + b_{ij}). If s ∈ R, define sA := (sa_{ij}).
Definition 2.25. (Multiplication of matrices.) Let A = (a_{ij}) be an m × k matrix, and let
B = (b_{ij}) be a k × n matrix. Define the product AB to be the m × n matrix whose i,j-th
entry is Σ_{ℓ=1}^{k} a_{iℓ} b_{ℓj}.
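A concrete check of the entry formula, as a sketch assuming numpy (the matrices are arbitrary examples):

    # Sketch: the i,j-th entry of AB is the sum over l of a_il * b_lj. Assumes numpy.
    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])        # 3 x 2
    B = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 3.0]])   # 2 x 3

    AB = A @ B                        # 3 x 3 product
    entry = sum(A[0, l] * B[l, 1] for l in range(2))   # the (1,2) entry via the formula (0-indexed [0,1])
    print(AB[0, 1], entry)            # both 2.0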
Definition 2.26. Let A = (a_{ij}) be an m × n matrix. The linear function determined by (or
associated with) A is the function L_A: R^n → R^m such that
L_A(x_1, ..., x_n) = ( Σ_{j=1}^{n} a_{1j} x_j, ..., Σ_{j=1}^{n} a_{mj} x_j ).
Definition 2.27. Let L: R^n → R^m be a linear function. The matrix determined by (or
associated with) L is the m × n matrix whose i-th column is the image of the i-th standard
basis vector for R^n under L, i.e., L(e_i).
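This correspondence between a linear map and its matrix can be seen numerically; a sketch assuming numpy (the linear map below is an arbitrary example):

    # Sketch: the columns of the matrix of L are L(e_1), ..., L(e_n). Assumes numpy;
    # L is an illustrative linear map R^3 -> R^2.
    import numpy as np

    def L(x):
        return np.array([2*x[0] - x[1], x[1] + 3*x[2]])

    columns = [L(e) for e in np.eye(3)]   # images of the standard basis vectors
    A = np.column_stack(columns)          # the matrix determined by L
    x = np.array([1.0, 2.0, 3.0])
    print(A @ x, L(x))                    # both give [0. 11.]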
Definition 2.28. An n × n matrix A is invertible or nonsingular if there is an n × n matrix
B such that AB = I, where I is the identity matrix whose entries consist of 1s along the
diagonal and 0s otherwise. In this case, B is called the inverse of A and denoted A^{-1}.
Theorem 2.1. Let a, b, c ∈ R^n and s, t ∈ R. Then
1. a + b = b + a.
2. (a + b) + c = a + (b + c).
3. 0 + a = a + 0 = a.
4. a + (-a) = (-a) + a = 0.
5. 1a = a and (-1)a = -a.
6. (st)a = s(ta).
7. (s + t)a = sa + ta.
8. s(a + b) = sa + sb.
Theorem 2.2. Let a, b, c ∈ R^n and s ∈ R. Then
1. a · b = b · a.
2. a · (b + c) = a · b + a · c.
3. (sa) · b = s(a · b).
4. a · a ≥ 0.
5. a · a = 0 if and only if a = 0.
Theorem 2.3. Let a, b ∈ R^n and s ∈ R. Then
1. |a| ≥ 0.
2. |a| = 0 if and only if a = 0.
3. |sa| = |s||a|.
4. |a · b| ≤ |a||b| (Cauchy-Schwarz inequality).
5. |a + b| ≤ |a| + |b| (triangle inequality).
Theorem 2.4. Let a, b ∈ R^n be nonzero vectors. Then
-1 ≤ (a · b)/(|a||b|) ≤ 1.
This shows that our definition of angle makes sense.
Theorem 2.5. (Pythagorean theorem.) Let a, b ∈ R^n. If a and b are perpendicular, then
|a|^2 + |b|^2 = |a + b|^2.
Theorem 2.6. Any linear subspace of R^n is spanned by a finite subset.
Theorem 2.7. If a = (a_1, ..., a_n) ≠ 0 and p = (p_1, ..., p_n) are elements of R^n, then
H := {x ∈ R^n | (x - p) · a = 0}
is a hyperplane. In other words, the set of solutions, (x_1, ..., x_n), to the equation
a_1 x_1 + ... + a_n x_n = d, where d = Σ_{i=1}^{n} a_i p_i, is a hyperplane. Conversely, every
hyperplane is the set of solutions to an equation of this form.
Theorem 2.8. If L: R^n → R^m is a linear function and W ⊆ R^n is a linear subspace, then
L(W) is a linear subspace of R^m.
Theorem 2.9. A linear map is determined by its action on the standard basis vectors. In
other words: if you know the images of the standard basis vectors, you know the image of
an arbitrary vector.
Theorem 2.10. The image of the linear map determined by a matrix is the span of the
columns of that matrix.
Theorem 2.11. Let W be a k-dimensional subspace of R^n spanned by vectors v_1, ..., v_k,
and let p ∈ R^n. Then a parametric equation for the affine space p + W is
f: R^k → R^n
(a_1, ..., a_k) ↦ p + Σ_{i=1}^{k} a_i v_i.
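For example, a 2-plane in R^3 through a point p spanned by v_1 and v_2 is parametrized this way; a sketch assuming numpy (p, v_1, v_2 are arbitrary choices):

    # Sketch: parametric equation for the affine 2-plane p + span{v1, v2} in R^3. Assumes numpy.
    import numpy as np

    p  = np.array([1.0, 0.0, 2.0])
    v1 = np.array([1.0, 1.0, 0.0])
    v2 = np.array([0.0, 1.0, 1.0])

    def f(a1, a2):
        # f(a1, a2) = p + a1*v1 + a2*v2 lies in the affine plane p + span{v1, v2}.
        return p + a1 * v1 + a2 * v2

    print(f(0, 0))    # p itself: [1. 0. 2.]
    print(f(2, -1))   # another point of the plane: [3. 1. 1.]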
Theorem 2.12. Let L be a linear function and let A be the matrix determined by L. Then
the linear map determined by A is L. (The converse also holds, switching the roles of L and A.)
Theorem 2.13. The linear structures on linear maps and on their associated matrices are
compatible: let L and M be linear functions with associated matrices A and B, respectively,
and let s ∈ R. Then the matrix associated with L + M is A + B, and the matrix associated
with sL is sA.
Theorem 2.14. Let L: R^n → R^k and M: R^k → R^m be linear functions with associated
matrices A and B, respectively. Then the matrix associated with the composition M ∘ L is
the product BA.
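A quick numerical check of this, as a sketch assuming numpy (the matrices are arbitrary examples):

    # Sketch: the matrix of M o L is BA, where A and B are the matrices of L and M. Assumes numpy.
    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, 1.0],
                  [3.0, 0.0]])          # matrix of L: R^2 -> R^3
    B = np.array([[1.0, 0.0, 1.0],
                  [2.0, 1.0, 0.0]])     # matrix of M: R^3 -> R^2

    x = np.array([1.0, -1.0])
    print(B @ (A @ x))                  # apply L, then M: [ 2. -3.]
    print((B @ A) @ x)                  # apply the single matrix BA: same result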
Definition 3.1. A subset U ⊆ R^n is open if for each u ∈ U there is a nonempty open ball
centered at u contained entirely in U: there exists a real number r > 0 such that B_r(u) ⊆ U.
Definition 3.2. A point u ∈ R^n is a limit point of a subset S ⊆ R^n if every open ball
B_r(u) centered at u contains a point of S different from u.
Definition 3.3. Let f: S → R^m be a function with S ⊆ R^n. Let s be a limit point of S. The
limit of f(x) as x approaches s is v ∈ R^m if for all real numbers ε > 0, there is a real number
δ > 0 such that 0 < |x - s| < δ and x ∈ S imply |f(x) - v| < ε. Notation: lim_{x→s} f(x) = v.
Definition 3.4. Let f: S → R^m with S ⊆ R^n, and let s ∈ S. The function f is continuous
at s ∈ S if for all real numbers ε > 0, there is a real number δ > 0 such that |x - s| < δ
and x ∈ S imply |f(x) - f(s)| < ε. (Thus, f is continuous at a limit point s ∈ S if and only if
lim_{x→s} f(x) = f(s), and f is automatically continuous at all points in S which are not limit
points of S.) The function f is continuous on S if it is continuous at each point of S.
Definition 3.5. Let f: U → R^m with U an open subset of R^n, and let e_i be the i-th standard
basis vector for R^n. The i-th partial derivative of f at u ∈ U is the vector in R^m
∂f/∂x_i(u) := lim_{t→0} [f(u + t e_i) - f(u)] / t,
provided this limit exists.
Definition 3.6. Let f: U → R with U an open subset of R^n. Let u ∈ U, and let v ∈ R^n be
a unit vector. The directional derivative of f at u in the direction of v is the real number
f_v(u) := lim_{t→0} [f(u + tv) - f(u)] / t,
provided this limit exists. The directional derivative of f at u in the direction of an arbitrary
nonzero vector w is defined to be the directional derivative of f at u in the direction of the
unit vector w/|w|.
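Both limits can be approximated by difference quotients with a small t; a sketch assuming numpy (the function f(x, y) = x^2 y and the point are arbitrary illustrations):

    # Sketch: approximating a partial derivative and a directional derivative by
    # difference quotients with small t. Assumes numpy; f is an illustrative example.
    import numpy as np

    def f(x):
        return x[0]**2 * x[1]             # f(x, y) = x^2 y

    u  = np.array([1.0, 2.0])
    e1 = np.array([1.0, 0.0])
    w  = np.array([3.0, 4.0])
    v  = w / np.linalg.norm(w)            # unit vector in the direction of w

    t = 1e-6
    print((f(u + t*e1) - f(u)) / t)       # ~ df/dx_1 at (1,2), exactly 2xy = 4
    print((f(u + t*v) - f(u)) / t)        # ~ f_v(u) = grad f . v = (4,1).(0.6,0.8) = 3.2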
Definition 3.7. Let f: U → R^m with U an open subset of R^n. Then f is differentiable at
u ∈ U if there is a linear function Df_u: R^n → R^m such that
lim_{h→0} |f(u + h) - f(u) - Df_u(h)| / |h| = 0.
The linear function Df_u is then called the derivative of f at u. The notation f'(u) is
sometimes used instead of Df_u. The function f is differentiable on U if it is differentiable
at each point of U.
Definition 3.8. Let f: U → R^m with U an open subset of R^n. The Jacobian matrix of f at
u ∈ U is the m × n matrix of partial derivatives of the component functions of f:
Jf(u) := ( ∂f_i/∂x_j(u) ) =
    [ ∂f_1/∂x_1(u)  ...  ∂f_1/∂x_n(u) ]
    [      ...      ...       ...     ]
    [ ∂f_m/∂x_1(u)  ...  ∂f_m/∂x_n(u) ]
1. The i-th column of the Jacobian matrix is the i-th partial derivative of f at u and is
called the i-th principal tangent vector to f at u.
2. If n = 1, then f is a parametrized curve and the Jacobian matrix consists of a single
column. This column is the tangent vector to f at u or the velocity of f at u, and its
length is the speed of f at u. We write
f'(u) = (f_1'(u), ..., f_m'(u))
for this tangent vector.
3. If m = 1, the Jacobian matrix consists of a single row. This row is called the gradient
vector for f at u and denoted ∇f(u) or grad f(u):
∇f(u) := ( ∂f/∂x_1(u), ..., ∂f/∂x_n(u) ).
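The Jacobian matrix can be approximated column by column using difference quotients in the coordinate directions; a sketch assuming numpy (the map f below is an arbitrary example, not from the course):

    # Sketch: approximating the Jacobian matrix of f at u by finite differences. Assumes numpy.
    import numpy as np

    def f(x):
        # Example map R^2 -> R^2: f(x, y) = (x*y, x + y^2).
        return np.array([x[0]*x[1], x[0] + x[1]**2])

    def jacobian(f, u, t=1e-6):
        u = np.asarray(u, dtype=float)
        cols = [(f(u + t*e) - f(u)) / t for e in np.eye(u.size)]   # principal tangent vectors
        return np.column_stack(cols)

    print(jacobian(f, [1.0, 2.0]))
    # Exact Jacobian at (1, 2): [[y, x], [1, 2y]] = [[2, 1], [1, 4]]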
Theorem 3.1. Let f: S → R^m and g: S → R^m where S is a subset of R^n.
1. The limit of a function is unique.
2. The limit, lim_{x→s} f(x), exists if and only if the corresponding limits for each of the
component functions, lim_{x→s} f_i(x), exist. In that case,
lim_{x→s} f(x) = ( lim_{x→s} f_1(x), ..., lim_{x→s} f_m(x) ).
3. Define f + g: S → R^m by (f + g)(x) := f(x) + g(x). If lim_{x→s} f(x) = a and lim_{x→s} g(x) = b,
then lim_{x→s} (f + g)(x) = a + b. Similarly, if t ∈ R, define tf: S → R^m by (tf)(x) := t(f(x)).
If lim_{x→s} f(x) = a, then lim_{x→s} (tf)(x) = ta.
4. If m = 1, define (fg)(x) := f(x)g(x) and (f/g)(x) := f(x)/g(x) (provided g(x) ≠ 0).
If lim_{x→s} f(x) = a and lim_{x→s} g(x) = b, then lim_{x→s} (fg)(x) = ab and, if b ≠ 0, then
lim_{x→s} (f/g)(x) = a/b.
5. If m = 1 and g(x) ≤ f(x) for all x, then lim_{x→s} g(x) ≤ lim_{x→s} f(x), provided these limits exist.
Theorem 3.2. Let f: S → R^m and g: S → R^m where S is a subset of R^n.
1. The function f is continuous if and only if the inverse image of every open subset of
R^m under f is the intersection of an open subset of R^n with S.
2. The function f is continuous at s if and only if each of its component functions is
continuous at s.
3. The composition of continuous functions is continuous.
4. The functions f + g and tf for t ∈ R as above are continuous at s ∈ S provided f and
g are continuous at s.
5. If m = 1 and f and g are continuous at s ∈ S, then fg and f/g are continuous at s
(provided g(s) ≠ 0 in the latter case).
6. A function whose coordinate functions are polynomials is continuous.
Theorem 3.3. If f: R^n → R^m is a linear transformation, then f is differentiable at each
p ∈ R^n, and Df_p = f.
Theorem 3.4. (The chain rule.) Let f: U → R^m and g: V → R^k where U is an open subset
of R^n and V is an open subset of R^m. Suppose that f(U) ⊆ V so that we can form the
composition g ∘ f: U → R^k. Suppose that f is differentiable at p ∈ U and g is differentiable
at f(p); then g ∘ f is differentiable at p, and
D(g ∘ f)_p = Dg_{f(p)} ∘ Df_p.
In terms of Jacobian matrices, we have
J(g ∘ f)(p) = Jg(f(p)) Jf(p).
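The matrix form of the chain rule can be verified numerically with the same finite-difference Jacobian; a sketch assuming numpy (f and g are arbitrary example maps):

    # Sketch: numerical check that J(g o f)(p) = Jg(f(p)) Jf(p). Assumes numpy;
    # f and g are illustrative examples, and jacobian() repeats the earlier finite-difference sketch.
    import numpy as np

    def f(x):
        return np.array([x[0] + x[1], x[0] * x[1]])         # f: R^2 -> R^2

    def g(y):
        return np.array([y[0]**2, y[0] - y[1], 3*y[1]])     # g: R^2 -> R^3

    def jacobian(h, u, t=1e-6):
        u = np.asarray(u, dtype=float)
        cols = [(h(u + t*e) - h(u)) / t for e in np.eye(u.size)]
        return np.column_stack(cols)

    p = np.array([1.0, 2.0])
    print(np.round(jacobian(lambda x: g(f(x)), p), 3))       # J(g o f)(p)
    print(np.round(jacobian(g, f(p)) @ jacobian(f, p), 3))   # Jg(f(p)) Jf(p): agrees up to error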
Theorem 3.5. Let f: U → R^m where U is an open subset of R^n. Then f is differentiable at
p ∈ U if and only if each component function f_i: U → R is differentiable at p, and in that case,
Df_p(v) = ((Df_1)_p(v), ..., (Df_m)_p(v)) for all v ∈ R^n.
Theorem 3.6. Let f: U → R where U is an o