
# MATH119 Study Guide - Absolute Convergence, Polynomial, Iterated Integral

17 Pages
Winter 2013

Department
Mathematics
Course Code
MATH119
Professor
Eddie Dupont

MATH 119 - Calculus 2 for Engineering
Kevin Carruthers
Winter 2013
Approximation Methods

Some integrals (i.e. those whose integrands have no elementary antiderivative) must be approximated, since we cannot find exact solutions. There are two such methods for approximation: analytic and numerical.

For analytic approximation we make a simplification using the theory of calculus to recognize reasonable approximations, e.g.

$$\sin x^2 \approx x^2$$

for any small $x$.
Numerical approximation is the brute-force approach. We refer to the definition of a definite integral, and calculate the total area of $n$ rectangles of width $\frac{x}{n}$, with heights determined by our function.
Obviously, both of these methods can be useful. When using high-powered technology, the numerical approach can reach near-perfection, but analytical methods can still be useful for finding approximations without assigning "random" values, or for checking whether a numerical analysis is giving a realistic result.
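As a sketch of the numerical approach, we can sum rectangle areas for $\int_0^{0.5} \sin t^2 \, dt$ (the integrand and interval are illustrative choices, not from the notes) and compare against the analytic small-$x$ approximation $\sin t^2 \approx t^2$:

```python
# Numerical approximation of an integral via n rectangles (left Riemann sum).
# We approximate the integral of sin(t^2) on [0, 0.5]; the integrand has no
# elementary antiderivative, so a numerical estimate is the natural choice.
import math

def riemann_sum(f, a, b, n):
    """Sum the areas of n rectangles of width (b - a)/n."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

approx = riemann_sum(lambda t: math.sin(t * t), 0.0, 0.5, 10_000)
# The analytic approximation sin(t^2) ≈ t^2 integrates to x^3 / 3.
analytic = 0.5 ** 3 / 3
print(approx, analytic)  # the two agree to about three decimal places
```

This illustrates the point above: the numerical answer is essentially exact for enough rectangles, while the analytic estimate is a quick sanity check that the numerical result is realistic.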
Linear Approximation

Linear approximation is also known as Tangent Line Approximation or Linearization. The definition of a derivative is

$$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$$

For values of $x$ near $a$, the tangent line gives a reasonable approximation to our function. The linear approximation near $x = a$ is

$$L(x) = f(a) + f'(a)(x - a)$$
This can be useful when the function is easy to evaluate at $x = a$, but difficult to work with at nearby points.

Note that this is similar to the differential approach $f(a + \Delta x) \approx f(a) + \Delta f$, where $\Delta f \approx f'(a)\,\Delta x$.
When dealing with $e$, we can generalize our formula: with $f(x) = ae^{b(x+c)}$, linearizing near $x = -c$ gives

$$L(x) = a + ab(x + c)$$
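A minimal sketch of linear approximation in code (the choice of $f(x) = \sqrt{x}$ and $a = 4$ is ours, purely for illustration):

```python
# Linear approximation L(x) = f(a) + f'(a)(x - a), sketched for f(x) = sqrt(x)
# at a = 4, a point where f is easy to evaluate exactly.
import math

def linearize(f, fprime, a):
    """Return the tangent-line approximation L(x) of f at x = a."""
    return lambda x: f(a) + fprime(a) * (x - a)

L = linearize(math.sqrt, lambda x: 0.5 / math.sqrt(x), 4.0)
print(L(4.1), math.sqrt(4.1))  # ≈ 2.025 vs ≈ 2.02485; very close near a = 4
```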
Bisection Method

The most straightforward approach is to use the Intermediate Value Theorem. Repeated iterations of this will quickly approach the correct root.

Example: find where $x = e^{-x}$.

We have $f(x) = x - e^{-x}$, which is continuous. We see that it is negative at $x = 0$ and positive at $x = 1$, so we know that our answer is between 0 and 1. Since $f(0.5) < 0$, we know that our answer is between 0.5 and 1. We then repeat this ad infinitum, until we have found a precise enough value.
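The bisection steps above can be sketched as follows, using the same example $f(x) = x - e^{-x}$:

```python
# Bisection sketch for the example above: solve x = e^{-x}, i.e. find the
# root of f(x) = x - e^{-x} on [0, 1], halving the bracket each step.
import math

def bisect(f, lo, hi, tol=1e-10):
    """Shrink [lo, hi] while f changes sign across it (Intermediate Value Theorem)."""
    assert f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid  # root lies in the right half
        else:
            hi = mid  # root lies in the left half
    return (lo + hi) / 2

root = bisect(lambda x: x - math.exp(-x), 0.0, 1.0)
print(root)  # ≈ 0.567143
```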
Newton's Method

Newton's Method is also known as the Newton-Raphson Procedure, and is based on a simple concept: if we can't solve $f(x) = 0$, solve $L(x) = 0$ instead.

Example: find a root of $x^3 - 2x - 5 = 0$.

With linear approximation we have

$$L(x) = f(x_0) + f'(x_0)(x - x_0)$$

Taking $x_0 = 2$, we have $f(2) = -1$ and $f'(x) = 3x^2 - 2$, so

$$L(x) = -1 + 10(x - 2) = 10x - 21$$

Setting $L(x) = 0$ gives us $x = 2.1$. We then take $L(x) = f(2.1) + f'(2.1)(x - 2.1)$, which leads to $x \approx 2.09457$ through repetition.
More formally, Newton’s Method is deﬁned as
1. Pick x0
2. Do linear approximation at x0
3. Set approximation to zero, solve for x
4. Repeat with new x.
We can improve this method by calculating a general formula for the repetition step. This is given by

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

With a good first guess, this method can converge extremely quickly. If it fails to converge, use bisection to improve your initial guess.
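The general formula can be sketched in code, applied to the earlier example $x^3 - 2x - 5 = 0$ with $x_0 = 2$:

```python
# Newton's Method: iterate x_{n+1} = x_n - f(x_n)/f'(x_n) from a first guess.
def newton(f, fprime, x0, steps=20, tol=1e-12):
    x = x0
    for _ in range(steps):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:  # stop once the update is negligible
            break
    return x

root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
print(root)  # ≈ 2.0945514815
```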
Fixed Point Iteration

A simpler alternative to Newton's Method is to rewrite $f(x) = 0$ as $x = g(x)$. Thus we find an approximate solution via

$$x_{n+1} = g(x_n)$$

This converges more slowly than Newton's Method, but is simpler to calculate.
Theorem: Convergence of Fixed-Point Iteration. Suppose that $f(x)$ is defined for all $x \in \mathbb{R}$, differentiable everywhere, and has a bounded derivative at all points. If $f(x) = x$ has a solution, and if $|f'(x)| < 1$ for all values of $x$ within some interval containing the fixed point, then the sequence generated by letting $x_{n+1} = f(x_n)$ will converge for any choice of $x_0$.
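A short sketch of fixed-point iteration, rewriting the earlier example $x = e^{-x}$ as $x_{n+1} = e^{-x_n}$ (note $|g'(x)| = e^{-x} < 1$ near the fixed point, so the theorem applies):

```python
# Fixed-point iteration: repeatedly apply g until the iterates settle.
import math

def fixed_point(g, x0, steps=200):
    """Iterate x_{n+1} = g(x_n) a fixed number of times."""
    x = x0
    for _ in range(steps):
        x = g(x)
    return x

root = fixed_point(lambda x: math.exp(-x), 0.5)
print(root)  # ≈ 0.567143, the same root bisection found, but more slowly
```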
Polynomial Interpolation

Suppose we are given $n+1$ points $(x, y)$ and we want to find a polynomial of degree $n$ passing through them. We could either solve this with matrices, or we can use Newton's Forward Difference Formula.

With

$$\Delta^m y_n = \Delta^{m-1} y_{n+1} - \Delta^{m-1} y_n$$

we can reduce the system to one of size $n - 1$. A shorthand way to do this is to create a column of $y$-values, then create a new column of the $n - 1$ differences between adjacent rows, and so on. You should end up with a triangular shape.

By iterating through this method until we have an $n = 1$ system, we can solve for each of the coefficients by substituting them into the general polynomial. This will give us a general solution which can then be used for any dataset. This solution is of the form

$$y = y_0 + x\,\Delta y_0 + \dots + \frac{x(x-1)\cdots(x-n+1)}{n!}\,\Delta^n y_0$$
If we have non-unit spacing, this formula becomes

$$y = y_0 + \frac{x - x_0}{h}\,\Delta y_0 + \dots + \frac{(x - x_0)\cdots(x - x_{n-1})}{n!\,h^n}\,\Delta^n y_0$$
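The difference table and forward-difference formula can be sketched as follows (the data set, points on $y = x^2$, is an illustrative choice, and the helper names are ours):

```python
# Build the triangular forward-difference table described above: each new
# column holds the differences of the previous one.
def difference_table(ys):
    """Return [ys, Δys, Δ²ys, ...] for equally spaced data."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([b - a for a, b in zip(prev, prev[1:])])
    return table

def newton_forward(x0, h, ys, x):
    """Evaluate Newton's Forward Difference Formula at x (spacing h, start x0)."""
    table = difference_table(ys)
    s = (x - x0) / h
    total, term, fact = 0.0, 1.0, 1
    for m, col in enumerate(table):
        if m > 0:
            term *= s - (m - 1)  # builds s(s-1)...(s-m+1)
            fact *= m            # builds m!
        total += term / fact * col[0]
    return total

# Interpolate through (0,0), (1,1), (2,4), (3,9) — points on y = x^2:
print(newton_forward(0, 1, [0, 1, 4, 9], 2.5))  # → 6.25
```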

Note that this is mostly a generalized version, and you may recover the equal-unit-spacing case by taking $x_0 = 0$ and $h = 1$. Also note that $x_n = x_0 + nh$, where $h = \Delta x$.

If we have both non-unit and non-equal spacing, we use Newton's Divided Differences, which is generalized from

$$\Delta^m f(x)_n = \frac{\Delta^{m-1} f(x)_{n+1} - \Delta^{m-1} f(x)_n}{x_{n+1} - x_n}$$

Linear Interpolation

High-order polynomials are known to be inaccurate and to oscillate wildly at each end. Based on this, we may sometimes wish to avoid calculating such polynomials. We can use Linear Interpolation for this, by simply using the closest two points to the value we are approximating. The Lagrange Linear Interpolation Formula is

$$f(x) \approx f(x_0)\left(\frac{x - x_1}{x_0 - x_1}\right) + f(x_1)\left(\frac{x - x_0}{x_1 - x_0}\right)$$

Taylor Polynomials

Taylor Polynomials are basically an extended version of the Linear Approximation formula given more than two points. This allows us to be (normally) more accurate, though high-order Taylor Polynomials completely break down. Note that the first-order Taylor Polynomial is equivalent to the Linear Approximation.

The $n$th order Taylor Polynomial is

$$P_{n,x_0}(x) = f(x_0) + (x - x_0)f'(x_0) + \frac{(x - x_0)^2}{2!}f''(x_0) + \dots + \frac{(x - x_0)^n}{n!}f^{(n)}(x_0)$$

More generally, we have

$$P_{n,x_0}(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k$$

Note that using MacLaurin's Approach we can derive this polynomial and prove that any Taylor Polynomial is unique.
Thus if we ever find a polynomial which matches the values of $f$ and its first $n$ derivatives at $x_0$, this polynomial must be a Taylor Polynomial, regardless of how we obtained it.

Since MacLaurin derived Taylor Polynomials centered at 0, we refer to such a polynomial as a MacLaurin Polynomial, which has the form

$$P_{n,0}(x) = \sum_{k=0}^{n} \frac{f^{(k)}(0)}{k!}\,x^k$$

Taylor's Theorem with Integral Remainders

It's important to determine how accurate our approximations are. We can find the magnitude of the error as $|f(x) - P_{n,x_0}(x)|$, but since we do not know the value of $f(x)$, we cannot calculate this exactly. As such, we'll find the upper bound of the error.

If $f(x)$ has $n+1$ derivatives at $x_0$, then

$$f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k + R_{n,x_0}(x)$$

where

$$R_{n,x_0}(x) = \int_{x_0}^{x} \frac{(x - t)^n}{n!}\,f^{(n+1)}(t)\,dt$$

Unfortunately, we can't evaluate this! As such, we will find an upper bound for the error, which may or may not be approximately equal to the error. If we can bound $|f^{(n+1)}(t)| \le K$ for all $t$ between $x_0$ and $x$, then we can find Taylor's Inequality:

$$E = |f(x) - P_{n,x_0}(x)| = |R_{n,x_0}(x)| = \left|\int_{x_0}^{x} \frac{(x - t)^n}{n!}\,f^{(n+1)}(t)\,dt\right| \le \int_{x_0}^{x} \frac{|x - t|^n}{n!}\,K\,dt \le K\,\frac{|x - x_0|^{n+1}}{(n+1)!}$$

Approximation of Integrals with Taylor Polynomials

When we're dealing with integrals, it turns out we can use substitution to simplify our work. For example, given the integral $\int_0^x e^{t^2}\,dt$, we can let $u = t^2$ and find the Taylor Polynomial for that. $P_{2,0}(u) = 1 + u + \frac{u^2}{2}$, so $P_{2,0}(t^2) = 1 + t^2 + \frac{t^4}{2}$. Thus we can approximate $\int_0^x e^{t^2}\,dt \approx \int_0^x \left(1 + t^2 + \frac{t^4}{2}\right)dt$, which is easy to evaluate.

We can introduce error into this as $e^u = P_{2,0}(u) + R_2(u)$, where $R_2$ is bounded by

$$|R_2(u)| \le K\,\frac{|u|^3}{3!}$$

where $|f^{(3)}(q)| \le K$ for any $q$ between 0 and $u$. Since $f(u) = e^u$, we have $f^{(3)}(q) = e^q$. We must bound this function, so we choose values reasonably close to our desired answer.
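As a numeric sanity check of Taylor's Inequality (our own example: $f(u) = e^u$ with $n = 2$ at $x = 0.5$, where $K = e^{0.5}$ bounds the third derivative on $[0, 0.5]$ because $e^u$ is increasing):

```python
# Verify that the actual Taylor error stays below the bound K|x|^(n+1)/(n+1)!.
import math

def maclaurin_exp(x, n):
    """Evaluate the MacLaurin polynomial P_{n,0}(x) for f(u) = e^u."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x, n = 0.5, 2
error = abs(math.exp(x) - maclaurin_exp(x, n))      # actual error
bound = math.exp(x) * x**(n + 1) / math.factorial(n + 1)  # Taylor's Inequality
print(error, bound)  # the actual error sits below the bound
```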
In this case, if we want to have $x = 0.5$, we must have $u \in [0, 0.25]$. To find an upper bound, we have $f^{(3)}(u) = e^u \le e^{0.25} < 2$. This can be used for our value of $K$, thus giving us $|R_2(u)| \le \frac{2|u|^3}{3!} = \frac{|u|^3}{3}$ on our interval. With the substitution $u = t^2$, we get $|R_2(t^2)| \le \frac{t^6}{3}$, and integrating shows our absolute error is less than $\frac{x^7}{21}$.

In summary, we have found

$$\int_0^x e^{t^2}\,dt = x + \frac{x^3}{3} + \frac{x^5}{10} \pm \frac{x^7}{21} \quad \text{for } x \in \left[-\tfrac{1}{2}, \tfrac{1}{2}\right]$$

Given some specific value of $x$, we can find this value numerically.

Infinite Series

Assume we take the limit of some error term. If this limit approaches zero, we can see that by adding more terms to our polynomial, we increase the accuracy of our approximation to perfection. Based on this, we can see that some functions can be expressed as infinite sums, which are technically the limit of sums, not a sum itself.

For example, we have

$$\sin x = \sum_{k=0}^{n} \frac{(-1)^k x^{2k+1}}{(2k+1)!} + R_{2n+1}(x)$$

Taking limits gives us

$$\sin x = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)!}$$

This is referred to as the Taylor Series centered at zero of $\sin x$, or the MacLaurin Series of $\sin x$.

Generally, since $f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k + R_n(x)$, if the remainder approaches zero we have

$$f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k$$

For some functions (example: $\frac{1}{1+x}$), this is only applicable on certain intervals. With some work, we can see that $\frac{1}{1+x} = \sum_{k=0}^{\infty} (-1)^k x^k$ for $x \in (-1, 1)$, but this only gives us a partial answer, and not easily at that!

Convergence of Infinite Series

Definition: an infinite series of constants $a_k$ is defined as

$$\sum_{k=0}^{\infty} a_k = \lim_{n \to \infty} \sum_{k=0}^{n} a_k$$

In other words, given a sequence of numbers $a_k$, we can construct the sequence of partial sums $s_n$ (i.e. $a_0$, $a_0 + a_1$, $a_0 + a_1 + a_2$, ...). If this sequence converges ($\lim_{n\to\infty} s_n = s$), then we say that the series $\sum a_k$ converges, and its sum is $s$. Otherwise, it diverges.

Note that we can also start at some $k \ne 0$, as this will not affect whether the series converges or not. However, it will affect the value of the sum.
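The partial sums of the MacLaurin series for $\sin x$ can be watched converging numerically (a small sketch of the limit-of-partial-sums idea, our own illustration):

```python
# Partial sums of the MacLaurin series for sin x: as more terms are added,
# the approximation approaches sin x (the remainder tends to zero).
import math

def sin_partial(x, n):
    """Sum_{k=0}^{n} (-1)^k x^(2k+1) / (2k+1)!"""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n + 1))

for n in (0, 1, 2, 3):
    print(n, sin_partial(1.0, n))
# n = 0 gives 1.0; by n = 3 the sum is within 1e-5 of sin(1) ≈ 0.841471
```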
Determining Convergence

Geometric Series

A geometric series has the form $\sum_{k=0}^{\infty} ar^k = a + ar + ar^2 + \dots$ We can redefine any geometric series with the equality

$$\sum_{k=0}^{\infty} ar^k = \lim_{n \to \infty} \frac{a(1 - r^n)}{1 - r}$$

For any $|r| < 1$ the sequence converges; otherwise it will diverge. We can thus conclude

$$\sum_{k=0}^{\infty} ar^k = \frac{a}{1 - r} \quad \text{if } |r| < 1$$

In case of the series having the wrong index to use this formula easily, we have two options: we can either reindex the equation (which is a useful but tedious skill), or think of our $a$ as the "first term" and $r$ as the "common ratio", and simply calculate those values.

We also note that a series can diverge even if $\lim_{k\to\infty} a_k = 0$. For example, the infinite series of $\frac{1}{k}$ diverges. Aside: this series is known as the harmonic series, and all harmonic series diverge.

Divergence Test

Since $\sum a_k$ can only converge if $\lim_{k\to\infty} a_k = 0$, we can say that if $\lim_{k\to\infty} a_k \ne 0$, then $\sum a_k$ diverges. We can use this test to determine whether a series will diverge, but not whether it will converge (i.e. the converse may or may not be true).

Integral Test

$\sum_{k=k_0}^{\infty} a_k$ converges if and only if $\int_{k_0}^{\infty} f(x)\,dx$ converges, where $f(k) = a_k > 0$. For this test, we must choose $f$ carefully: it must be continuous and positive, and decreasing as $x \to \infty$.

P-Series

$\sum \frac{1}{k^p}$ converges if $p > 1$ and otherwise diverges. Note that the harmonic series is a (diverging) p-series.

Comparison Test

Suppose we are given a series $\sum a_k$. If we can identify a second series $\sum b_k$ such that $a_k \le b_k$ and $\sum b_k$ converges, then $\sum a_k$ also converges. If $a_k \ge b_k$ and $\sum b_k$ diverges, then $\sum a_k$ diverges as well.

Note: $\sum b_k$ needs to be a series whose behaviour we understand, and is usually a geometric or p-series.

Examples: $\frac{\ln k}{k} > \frac{1}{k}$, so both diverge. $\frac{1}{k^2 + 2} < \frac{1}{k^2}$, so both converge.

Limit Comparison Test

If $\lim_{k\to\infty} \frac{a_k}{b_k} = L$, where $L$ is a positive constant and $a_k, b_k \ge 0$, then $\sum a_k$ and $\sum b_k$ either both converge or both diverge.

Example: for $\sum \frac{1}{k^2 - 1}$, we can't use the comparison test against $\sum \frac{1}{k^2}$ (the inequality points the wrong way). With this test, we see that $L = 1$, and so they both converge.
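A quick numeric check of the geometric series formula (with $a = 1$, $r = \frac{1}{2}$, our own illustrative values):

```python
# Partial sums of a geometric series approach a / (1 - r) when |r| < 1.
# Here a = 1, r = 1/2, so the infinite sum is 2.
a, r = 1.0, 0.5
partial = sum(a * r**k for k in range(50))
print(partial, a / (1 - r))  # 50 terms already agree to ~15 digits
```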
Similarly, $\sum \frac{1}{3k+2}$ diverges, since $\sum \frac{1}{k}$ diverges and $L = \frac{1}{3}$.

Alternating Series Test (Leibniz Test)

Consider a series $\sum (-1)^k a_k$ with terms $a_0 - a_1 + a_2 - a_3 + \dots$ If $\lim_{k\to\infty} a_k = 0$ and the series is eventually decreasing, then the series converges.

Example: $\sum \frac{(-1)^k}{\sqrt{k}}$ converges, despite being quite similar to a diverging p-series.

Absolute Convergence vs Conditional Convergence

A converging series $\sum a_k$ is only absolutely convergent if $\sum |a_k|$ also converges; otherwise it is conditionally convergent.

Aside: if you re-order the sums of a conditionally converging series, you can make it converge to a different sum!

Ratio Test

Suppose $\lim_{k\to\infty} \left|\frac{a_{k+1}}{a_k}\right| = L$. If $L < 1$, then the series is absolutely convergent. If $L > 1$, then the series is divergent. If $L = 1$, then the test fails and we must use another.

Note that this test is usually all we need to determine the kind of series a given Taylor Series is, and that finding $L = 1$ is more or less the only reason we would need another test.

A subset of this test is the root test, which uses the limit $\lim_{k\to\infty} |a_k|^{1/k}$. This test is useful when everything appears raised to the power of $k$, but that is a rare structure to encounter and we can mostly ignore this test.

Power Series

A power series is the general form of the Taylor Series.
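As an illustration of the ratio test (our own example: the series for $e^x$ with $a_k = \frac{x^k}{k!}$, whose ratio $\frac{|x|}{k+1}$ tends to $0 < 1$, so the series converges absolutely for every $x$):

```python
# Ratio test sketch: compute |a_{k+1} / a_k| for a_k = x^k / k! and watch it
# shrink toward 0 as k grows, confirming absolute convergence.
import math

def ratio(x, k):
    a_k = x**k / math.factorial(k)
    a_k1 = x**(k + 1) / math.factorial(k + 1)
    return abs(a_k1 / a_k)  # equals |x| / (k + 1)

print([round(ratio(3.0, k), 4) for k in (1, 10, 100)])  # ratios shrink toward 0
```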