# MATH119 Study Guide - Absolute Convergence, Polynomial, Iterated Integral

MATH 119 - Calculus 2 for Engineering

Kevin Carruthers

Winter 2013

## Approximation Methods

Some integrals (e.g. those with no elementary antiderivative) cannot be evaluated exactly, so we must approximate them. There are two approaches to approximation: analytic and numerical.

For an analytic approximation we use the theory of calculus to recognize a reasonable simplification, e.g.

$$\sin(x^2) \approx x^2$$

for any small $x$.

Numerical approximation is the brute-force approach. We refer to the definition of the definite integral and calculate the area of $n$ rectangles of width $\Delta x = \frac{b-a}{n}$, with heights determined by our function.
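As a minimal sketch of this rectangle-summing idea (the function name `riemann_sum` and the midpoint rule are my own choices, not from the notes):

```python
def riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with n midpoint rectangles."""
    width = (b - a) / n
    total = 0.0
    for i in range(n):
        midpoint = a + (i + 0.5) * width  # center of the i-th rectangle
        total += f(midpoint) * width
    return total

# Example: the integral of x^2 from 0 to 1 is exactly 1/3.
approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
print(approx)  # very close to 0.3333...
```

Increasing `n` shrinks the rectangles and improves the approximation, at the cost of more function evaluations.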

Both of these methods can be useful. With high-powered technology, the numerical approach can reach near-perfect accuracy, but analytical methods are still useful for finding approximations without assigning "random" values, or for checking whether a numerical analysis is giving a realistic result.

## Linear Approximation

Linear approximation is also known as tangent line approximation or linearization. The definition of the derivative is

$$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$$

For values of $x$ near $a$, the tangent line gives a reasonable approximation to our function. The linear approximation near $x = a$ is

$$L(x) = f(a) + f'(a)(x - a)$$



This is useful when the function is easy to evaluate at $x = a$ but difficult to work with at nearby points.

Note that this is similar to the differential approach $f(a + \Delta x) \approx f(a) + \Delta f$, where $\Delta f \approx f'(a)\,\Delta x$.

When dealing with $e$, we can generalize our formula: with $f(x) = ae^{b(x+c)}$, the linearization near $x = -c$ is

$$L(x) = a + ab(x + c)$$
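The tangent-line formula above can be sketched in code as follows (the helper name `linearize` and the $\sqrt{4.1}$ example are my own, for illustration):

```python
import math

def linearize(f, fprime, a):
    """Return the tangent-line approximation L(x) = f(a) + f'(a)(x - a)."""
    fa, slope = f(a), fprime(a)
    return lambda x: fa + slope * (x - a)

# Example: approximate sqrt(4.1) using the tangent line to sqrt(x) at a = 4,
# where sqrt is easy to evaluate exactly.
L = linearize(math.sqrt, lambda x: 0.5 / math.sqrt(x), 4.0)
print(L(4.1))  # 2.025, vs. the true value sqrt(4.1) ≈ 2.02485
```

The error grows as $x$ moves away from $a$, which is why the approximation is only trusted "near" the base point.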

## Bisection Method

The most straightforward approach is to use the Intermediate Value Theorem: a continuous function that changes sign on an interval has a root in that interval. Repeated halving of the interval quickly approaches the correct root.

Example: find where $x = e^{-x}$.

We have $f(x) = x - e^{-x}$, which is continuous. It is negative at $x = 0$ and positive at $x = 1$, so our answer lies between 0 and 1. Since $f(0.5) < 0$, our answer lies between 0.5 and 1. We then repeat this ad infinitum, until we have found a sufficiently precise value.
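The halving procedure for this example can be sketched as follows (the function name `bisect` and the tolerance parameter are illustrative choices):

```python
import math

def bisect(f, lo, hi, tol=1e-6):
    """Shrink [lo, hi] by halving; assumes f(lo) and f(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # sign change in the left half: keep [lo, mid]
        else:
            lo = mid  # otherwise keep [mid, hi]
    return (lo + hi) / 2

# Solve x = e^{-x} by finding the root of f(x) = x - e^{-x} on [0, 1].
root = bisect(lambda x: x - math.exp(-x), 0.0, 1.0)
print(root)  # ≈ 0.567143
```

Each iteration halves the interval, so the error shrinks by a guaranteed factor of 2 per step regardless of the function's shape.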

## Newton's Method

Newton's Method is also known as the Newton-Raphson procedure, and is based on a simple concept: if we can't solve $f(x) = 0$, solve $L(x) = 0$ instead.

Example: find a root of $x^3 - 2x - 5 = 0$.

With linear approximation we have

$$L(x) = f(x_0) + f'(x_0)(x - x_0)$$

Starting at $x_0 = 2$, we have $f(2) = -1$ (close to zero) and $f'(x) = 3x^2 - 2$, so $f'(2) = 10$ and

$$L(x) = -1 + 10(x - 2) = 10x - 21$$

Setting $L(x) = 0$ gives $x = 2.1$. We then take $f(2.1) + f'(2.1)(x - 2.1)$, which through repetition leads to $x \approx 2.09455$.

More formally, Newton's Method is:

1. Pick an initial guess $x_0$.

2. Do linear approximation at $x_0$.

3. Set the approximation to zero and solve for $x$.

4. Repeat with the new $x$.


We can improve this method by calculating a general formula for the repetition step. This is given by

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

With a good first guess, this method can converge extremely quickly. If it fails to converge, use bisection to improve your initial guess.
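The iteration formula can be sketched as follows, reusing the $x^3 - 2x - 5$ example from above (the function name `newton` and the stopping rule are my own choices):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until the step size is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    return x  # give up after max_iter steps (may not have converged)

# Root of x^3 - 2x - 5 = 0, starting from the text's guess x0 = 2.
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
print(round(root, 5))  # 2.09455
```

Near a simple root the error roughly squares each iteration, which is why a handful of steps already gives many correct digits.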

## Fixed Point Iteration

A simpler alternative to Newton's Method is to rewrite $f(x) = 0$ as $x = g(x)$. We then find an approximate solution via

$$x_{n+1} = g(x_n)$$

This converges more slowly than Newton's Method, but is simpler to calculate.
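As a sketch, the $x = e^{-x}$ equation from the bisection example is already in fixed-point form with $g(x) = e^{-x}$ (the helper name `fixed_point` is my own):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# g(x) = e^{-x} has |g'(x)| = e^{-x} < 1 near the fixed point, so this converges.
root = fixed_point(lambda x: math.exp(-x), 0.5)
print(root)  # ≈ 0.567143, same answer as bisection
```

Note that no derivative evaluation is needed, which is the main practical advantage over Newton's Method.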

Theorem (Convergence of Fixed-Point Iteration): Suppose that $f(x)$ is defined for all $x \in \mathbb{R}$, differentiable everywhere, and has a bounded derivative at all points. If $f(x) = x$ has a solution, and if $|f'(x)| < 1$ for all values of $x$ within some interval containing the fixed point, then the sequence generated by letting $x_{n+1} = f(x_n)$ will converge for any choice of $x_0$ in that interval.

## Polynomial Interpolation

Suppose we are given $n + 1$ points $(x_i, y_i)$ and we want to find a polynomial of degree $n$ passing through them. We could either solve for the coefficients with matrices, or we can use Newton's Forward Difference Formula.

With

$$\Delta^m y_n = \Delta^{m-1} y_{n+1} - \Delta^{m-1} y_n$$

we can reduce the system to one of size $n - 1$. A shorthand way to do this is to create a column of $y$-values, then create a new column of the $n - 1$ differences between consecutive rows, and so on. You should end up with a triangular shape.

By iterating through this method until we have an $n = 1$ system, we can solve for each of the coefficients by substituting them into the general polynomial. This gives us a general solution which can then be used for any dataset. For unit spacing (measuring $x$ from $x_0$), this solution is of the form

$$y = y_0 + x\,\Delta y_0 + \dots + \frac{x(x-1)\cdots(x-n+1)}{n!}\,\Delta^n y_0$$

If we have non-unit spacing $h$, this formula becomes

$$y = y_0 + \frac{x - x_0}{h}\,\Delta y_0 + \dots + \frac{(x - x_0)(x - x_1)\cdots(x - x_{n-1})}{n!\,h^n}\,\Delta^n y_0$$
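The triangular difference table and the unit-spacing formula can be sketched as follows (function names are my own; the sample data $y = x^2$ at $x = 0, 1, 2, 3$ is an illustration, not from the notes):

```python
def forward_differences(ys):
    """Build the triangular table: row m holds the m-th forward differences."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_forward(ys, t):
    """Evaluate the forward-difference polynomial at t = (x - x0)/h,
    given equally spaced samples y_0, ..., y_n."""
    table = forward_differences(ys)
    result, coeff = 0.0, 1.0
    for m in range(len(ys)):
        result += table[m][0] * coeff      # term: Δ^m y_0 · t(t-1)...(t-m+1)/m!
        coeff *= (t - m) / (m + 1)
    return result

# Samples of y = x^2 at x = 0, 1, 2, 3; the table rows are
# [0, 1, 4, 9], [1, 3, 5], [2, 2], [0], and interpolating at t = 2.5 recovers 2.5^2.
print(newton_forward([0, 1, 4, 9], 2.5))  # 6.25
```

Only the first entry of each table row ($\Delta^m y_0$) appears in the formula; the rest of the triangle is scaffolding for computing those leading entries.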
