MATH 119 - Calculus 2 for Engineering
Kevin Carruthers
Winter 2013
Approximation Methods
Some integrals (i.e. ones with no elementary antiderivative) must be approximated, since we cannot find exact solutions. There are two approaches to approximation: analytic and numerical.
For analytic approximation we make a simplification, using the theory of calculus to recognize reasonable approximations, e.g.
sin(x^2) ≈ x^2
for any small x.
Numerical approximation is the brute force approach. We refer to the definition of the definite integral and calculate the area of n rectangles, each with width equal to the interval length divided by n and height determined by our function.
Obviously, both of these methods can be useful. When using high-powered technology, the numerical approach can reach near-perfection, but analytical methods can still be useful to find approximations without assigning "random" values or to determine whether a numerical analysis is giving a realistic result.
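To illustrate the numerical approach, here is a minimal midpoint Riemann-sum sketch in Python (the function, interval, and number of rectangles are arbitrary choices for illustration, not values from the course):

```python
import math

def riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with n midpoint rectangles."""
    width = (b - a) / n                   # each rectangle's width
    total = 0.0
    for i in range(n):
        midpoint = a + (i + 0.5) * width  # sample the height at the midpoint
        total += f(midpoint) * width
    return total

# Example: sin(x^2) has no elementary antiderivative, so approximate numerically.
print(riemann_sum(lambda x: math.sin(x**2), 0, 1, 1000))  # roughly 0.3103
```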
Linear Approximation
Linear approximation is also known as Tangent Line Approximation or Linearization. The definition of the derivative is
f'(a) = lim_{x→a} [f(x) − f(a)] / (x − a)
For values of x near a, the tangent line gives a reasonable approximation to our function. The linear approximation near x = a is
L(x) = f(a) + f'(a)(x − a)
This can be useful when the function is easy to evaluate at x = a, but difficult to work with at nearby points.
Note that this is similar to the differential approach f(a + ∆x) = f(a) + ∆f, where ∆f ≈ f'(a)∆x.
When dealing with exponentials, we can generalize our formula: with f(x) = a e^(b(x+c)) we have
L(x) = a + ab(x + c)
which is the linearization about x = −c.
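As a quick illustration of linearization in Python (the choice of f(x) = √x and a = 4 below is just an example, not taken from the notes):

```python
import math

def linearize(f, df, a):
    """Return L(x) = f(a) + f'(a)(x - a), the tangent-line approximation at a."""
    return lambda x: f(a) + df(a) * (x - a)

# Approximate sqrt(4.1) using the tangent line to sqrt(x) at a = 4.
L = linearize(math.sqrt, lambda x: 1 / (2 * math.sqrt(x)), 4)
print(L(4.1))          # 2.025
print(math.sqrt(4.1))  # 2.0248...
```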
Bisection Method
The most straightforward approach to root-finding is to use the Intermediate Value Theorem. Repeated iterations of this will quickly approach the correct root.
Example: find where x = e^(−x).
We have f(x) = x − e^(−x), which is continuous. We see that it is negative at x = 0 and positive at x = 1, so we know that our answer is between 0 and 1. Since f(0.5) < 0, we know that our answer is between 0.5 and 1. We then repeat this ad infinitum, until we have found a precise enough value.
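A minimal sketch of the bisection idea in Python, applied to the example above (the tolerance is an arbitrary choice):

```python
import math

def bisect(f, lo, hi, tol=1e-6):
    """Narrow [lo, hi] until its width is below tol, assuming f changes sign on it."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:  # sign change in the left half
            hi = mid
        else:                    # otherwise the root is in the right half
            lo = mid
    return (lo + hi) / 2

# Solve x = e^(-x) by finding the root of f(x) = x - e^(-x) on [0, 1].
print(bisect(lambda x: x - math.exp(-x), 0, 1))  # roughly 0.567143
```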
Newton’s Method
Newton's Method is also known as the Newton-Raphson Procedure, and is based on a simple concept: if we can't solve f(x) = 0, solve L(x) = 0 instead.
Example: find a root of x^3 − 2x − 5 = 0.
With linear approximation we have
L(x) = f(x_0) + f'(x_0)(x − x_0)
Taking x_0 = 2 (since f(2) = −1 is close to zero) and f'(x) = 3x^2 − 2, we get
L(x) = −1 + 10(x − 2) = 10x − 21
Setting L(x) = 0 gives us x = 2.1. We then take f(2.1) + f'(2.1)(x − 2.1), which leads to x = 2.09457 through repetition.
More formally, Newton’s Method is defined as
1. Pick x0
2. Do linear approximation at x0
3. Set approximation to zero, solve for x
4. Repeat with new x.
We can improve this method by calculating a general formula for the repetition step. This is given by
x_{n+1} = x_n − f(x_n) / f'(x_n)
With a good first guess, this method can converge extremely quickly. If it fails to converge,
use bisection to improve your initial guess.
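A minimal sketch of this iteration in Python, applied to the earlier example x^3 − 2x − 5 = 0 (the fixed iteration count is an arbitrary choice; a fuller implementation would stop on a tolerance):

```python
def newton(f, df, x0, steps=10):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n), starting from the guess x0."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# Root of x^3 - 2x - 5 = 0, starting from x0 = 2 as in the example above.
print(newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2))  # roughly 2.0945515
```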
Fixed Point Iteration
A simpler alternative to Newton's Method is to rewrite f(x) = 0 as x = g(x). We then find an approximate solution via
x_{n+1} = g(x_n)
This converges more slowly than Newton's Method, but is simpler to calculate.
Theorem: Convergence of Fixed-Point Iteration. Suppose that f(x) is defined for all x ∈ R, differentiable everywhere, and has a bounded derivative at all points. If f(x) = x has a solution, and if |f'(x)| < 1 for all values of x within some interval containing the fixed point, then the sequence generated by letting x_{n+1} = f(x_n) will converge for any choice of x_0.
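A minimal sketch of fixed-point iteration in Python, reusing x = e^(−x) with g(x) = e^(−x) (the starting point and iteration count are arbitrary choices):

```python
import math

def fixed_point(g, x0, steps=50):
    """Iterate x_{n+1} = g(x_n) starting from x0."""
    x = x0
    for _ in range(steps):
        x = g(x)
    return x

# Solve x = e^(-x): here |g'(x)| = e^(-x) < 1 near the fixed point, so the iteration converges.
print(fixed_point(lambda x: math.exp(-x), 0.5))  # roughly 0.567143
```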
Polynomial Interpolation
Suppose we are given n + 1 points (x_i, y_i) and we want to find a polynomial of degree n passing through them. We can either solve this with matrices or use Newton's Forward Difference Formula.
With
∆^m y_n = ∆^(m−1) y_(n+1) − ∆^(m−1) y_n
we can reduce the system to one of size n − 1. A shorthand way to do this is to create a column of y-values, then create a new column of the n − 1 differences between consecutive rows, and so on. You should end up with a triangular shape.
By iterating through this method until we have an n = 1 system, we can solve for each of the coefficients by substituting them into the general polynomial. This will give us a general solution which can then be used for any dataset. This solution is of the form
y = y_0 + x ∆y_0 + ... + [x(x − 1)···(x − n + 1) / n!] ∆^n y_0
If we have non-unit spacing h, this formula becomes
y = y_0 + [(x − x_0)/h] ∆y_0 + ... + [(x − x_0)(x − x_1)···(x − x_(n−1)) / (n! h^n)] ∆^n y_0
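A minimal sketch in Python of building the difference table and evaluating the unit-spacing formula (the sample data points are made up for illustration):

```python
from math import factorial

def forward_differences(ys):
    """Return [y0, Δy0, Δ²y0, ...] by repeatedly differencing the column of y-values."""
    diffs = [ys[0]]
    col = list(ys)
    while len(col) > 1:
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[0])
    return diffs

def newton_forward(ys, x):
    """Evaluate y = y0 + x Δy0 + x(x-1)/2! Δ²y0 + ... for unit-spaced data starting at x = 0."""
    diffs = forward_differences(ys)
    total, product = 0.0, 1.0
    for k, d in enumerate(diffs):
        total += product * d / factorial(k)
        product *= (x - k)  # extend x(x-1)...(x-k) for the next term
    return total

# y-values of x^2 at x = 0, 1, 2, 3; the interpolant reproduces the polynomial.
print(newton_forward([0, 1, 4, 9], 2.5))  # 6.25
```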