What is a differential equation?
A differential equation is any equation containing one or more derivatives.
The simplest differential equation, therefore, is just a usual integration
y′ = f (t).
Comment: The solution of the above is, of course, the indefinite integral of
f(t), y = F(t) + C, where F(t) is any antiderivative of f(t) and C is an
arbitrary constant. Such a solution is called a general solution of the
differential equation. It is really a set of infinitely many functions, each
differing from the others by one or more constant terms and/or constant coefficients.
Every differential equation, if it does have a solution, always has infinitely
many functions satisfying it. All of these solutions, which differ from one
another by one, or more, arbitrary constant / coefficient(s), are given by the
formula of the general solution.
© 2008, 2012 Zachary S Tseng

Classification of Differential Equations
Ordinary vs. partial differential equations
An ordinary differential equation (ODE) is a differential equation
with a single independent variable, so the derivative(s) it contains are
all ordinary derivatives.
A partial differential equation (PDE) is a differential equation with
two or more independent variables, so the derivative(s) it contains are
partial derivatives.
Order of a differential equation
The order of a differential equation is equal to the order of the highest
derivative it contains.
(1) y′ + y^5 = t^2 e^(−t) (first order ODE)
(2) cos(t)y′ − sin(t) y = 3tcos(t) (first order ODE)
(3) y″ − 3y′ + 2y = e^t cos(5t) (second order ODE)
(4) y^(4) + (y′)^2 = 0 (fourth order ODE)
(5) u_xx = 4u_tt + u_t (second order PDE)
(6) y‴ − (y″)(y′) + 2y = 4e^t (third order ODE)
Linear vs. nonlinear differential equations:
An n-th order ordinary differential equation is called linear if it can be
written in the form:
y^(n) = a_(n−1)(t) y^(n−1) + a_(n−2)(t) y^(n−2) + … + a_1(t) y′ + a_0(t) y + g(t).
where the coefficient functions (the a's) and g are any functions of the independent
variable, t in this instance. Note that the independent variable could
appear in any shape or form in the equation, but the dependent
variable, y, and its derivatives can only appear alone, in the first
power, not in a denominator or inside another (transcendental)
function. In other words, the right-hand side of the equation above
must be a linear function of the dependent variable y and its
derivatives. Otherwise, the equation is said to be nonlinear.
In the examples above, (2) and (3) are linear equations, while (1), (4) and (6)
are nonlinear. (5) is a linear partial differential equation, as each of the
partial derivatives appears alone in the first power. The next example looks
similar to (3), but it is a (second order) nonlinear equation, instead. Why?
(7) y″ − 3y′ + 2y = e^t cos(5y)
Exercises A-1.1:
1 – 9 Determine the order of each equation below. Also determine whether
each is a linear or nonlinear equation.
1. y′ + t y = cos(t)
2. y‴ + 11y″ − y′ + e^(−6t) y = 2y ln(t)
3. y″ = t y^2
4. 5y′ = t^5 y
5. y^3 + sec(t) = y^(6) y′
6. y″ + 5y′ + 4y = −e^t y
7. (y′)^2 − y = 1
8. y″ cos(y) = t sin(t) y^(5)
9. e^t y^(4) + 3y″ − cot(e^t) y = 2t^6 + y′
10. For what value(s) of n will the following equation be linear?
y′ − 9y^n = t^(2n) sin(3nt)
1. 1st order, linear; 2. 3rd order, linear; 3. 2nd order, nonlinear;
4. 1st order, linear; 5. 6th order, nonlinear; 6. 2nd order, linear;
7. 1st order, nonlinear; 8. 5th order, nonlinear; 9. 4th order, linear;
10. When n = 0 or 1, the equation is linear.
Direction Field (or Slope Field)
The direction field is a simple visualization tool that can be used to study the
approximate behavior of the solutions of a first order differential equation
y′ = f (t, y) ,
without having to solve it first.
What is it? First draw a grid on the ty-plane. Then for each point (t_0, y_0) on
the grid compute the value f(t_0, y_0). Note that f(t_0, y_0) = y′ is actually the
instantaneous rate of change of a solution, y = φ(t), of the given equation at
the point (t_0, y_0). It, therefore, represents the slope of the line tangent to the
solution curve passing through (t_0, y_0) at that exact point. Draw a short
arrow at each such point (t_0, y_0) pointing in the direction given by the
slope of the tangent line. After an arrow is drawn for every point of the grid,
we can "connect the dots" and trace curves by connecting one arrow
to the next arrow in the grid that the first is pointing at. Curves
traced this way are called integral curves (so called because, in effect, each
approximates an antiderivative of the function f(t, y)). Each integral
curve approximates the behavior of a particular function that satisfies the
given differential equation. The collection of all integral curves
approximates the behavior of the general solution of the equation.
Example: y′ = 2t
What we are doing is constructing the graphs of some functions that satisfy
the given differential equation by first approximating each solution
function's local behavior at a point (t_0, y_0) using its linearization (i.e., the
tangent line approximation). Then we obtain the longer-term behavior by
connecting those local approximations, point by point across the
grid, into curves that fairly accurately resemble the actual graphs of those
functions. We will look at this tool in more detail in a later section, when
we study Autonomous Equations.
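As a concrete illustration (a sketch of my own, not part of the original notes), the arrow-following idea can be carried out in code: start at a point, repeatedly step a short distance along the tangent direction given by f(t, y) = 2t, and compare the traced integral curve with the exact solutions y = t^2 + C.

```python
def f(t, y):
    return 2 * t           # right-hand side of y' = f(t, y) for y' = 2t

def trace_integral_curve(t0, y0, t_end, h=0.001):
    """Trace one integral curve by repeatedly stepping along the tangent line."""
    t, y = t0, y0
    while t < t_end - 1e-12:
        y += h * f(t, y)   # follow the arrow (slope f) for a short step h
        t += h
    return y

# Starting at (0, 1) the trace should stay close to the exact curve y = t^2 + 1.
print(trace_integral_curve(0.0, 1.0, 2.0))   # close to 2^2 + 1 = 5
```

The shorter the steps, the closer the traced curve hugs the true integral curve; this is exactly the "connect the arrows" procedure done by hand on a direction field.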
Figure 1. The direction field of y′ = 2t
Figure 2. The direction field of y′ = 2t (with a few integral curves
traced – approximating curves of the form y = t^2 + C).
Figure 3. Another example: the direction field of y′ = t − y
Comment: We will learn shortly how to solve this equation. The exact
solutions are functions of the form y = t − 1 + C e^(−t). When C = 0, the
solution is just the line y = t − 1, which appears as the slant asymptote of all
other solutions in the above graph.
First Order Linear Differential Equations
A first order ordinary differential equation is linear if it can be written in the
form
y′ + p(t) y = g(t)
where p and g are arbitrary functions of t.
This is called the standard or canonical form of the first order linear
equation.
We'll start by attempting to solve a couple of very simple equations of this
form.
Example: Find the general solution of the equation
y′ − 2y = 0.
First let's rewrite the equation as
dy/dt = 2y.
Then, assuming y ≠ 0, divide both sides by y:
(1/y)(dy/dt) = 2
Multiply both sides by dt:
dy/y = 2 dt
Now what we have here are two derivatives which are equal. It
implies (as a consequence of the Mean Value Theorem) that the
antiderivatives of the two sides must differ only by a constant of
integration. Integrate both sides:
ln| y| = 2t + C
or, |y| = e^(2t + C) = e^C e^(2t) = C₁ e^(2t),
where C₁ = e^C is an arbitrary, but always positive, constant.
To simplify one step further, we can drop the absolute value sign and
relax the restriction on C₁. C₁ can now be any positive or negative
(but not zero) constant. Hence
y(t) = C₁ e^(2t),  C₁ ≠ 0.  (1)
Lastly, what happens if our earlier assumption that y ≠ 0 is false?
Well, if y = 0 (that is, when y is the constant function zero), then y′ = 0
and the equation is reduced to
0 − 0 = 0
which is an expression that is always true. Therefore, the constant
zero function is also a solution of the given equation. Not exactly by
coincidence, it corresponds to the missing case of C₁ = 0 in (1).
As a result, the general solution is in the form
y(t) = C e^(2t), for any constant C.
That is, any function of this form, regardless of the value of C, will
satisfy the equation y′ − 2y = 0. While there are infinitely many such
functions, no other type of function could satisfy the equation.
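A quick numerical sanity check (illustrative code, not from the notes): approximating y′ with a central difference confirms that y = C e^(2t) makes y′ − 2y vanish for any constant C, including C = 0.

```python
import math

def residual(C, t, h=1e-6):
    """Central-difference approximation of y' - 2y for y(t) = C e^(2t)."""
    y = lambda s: C * math.exp(2 * s)
    y_prime = (y(t + h) - y(t - h)) / (2 * h)
    return y_prime - 2 * y(t)

for C in (-3.0, 0.0, 0.5, 2.0):
    # the residual is (numerically) zero for every choice of C
    assert abs(residual(C, 1.0)) < 1e-4
```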
A similar technique can also be used to solve the next example.
Example: For arbitrary constants r and k, r ≠ 0, solve the equation
y′ − r y = k.
We will proceed as before to rewrite the equation into an equality of two
derivatives. Then integrate both sides.
dy/dt = ry + k
Assuming ry + k ≠ 0:
dy/(ry + k) = dt  →  ∫ dy/(ry + k) = ∫ dt
Therefore, (1/r) ln|ry + k| = t + C
Simplifying: ln|ry + k| = rt + C₁  →  |ry + k| = e^(rt + C₁) = e^(C₁) e^(rt)
Dropping the absolute value sign:
ry + k = C₂ e^(rt),  where C₂ = ±e^(C₁) is any nonzero constant.
That is, y = (1/r)(C₂ e^(rt) − k) = (C₂/r) e^(rt) − k/r.
Lastly, it can be easily checked that if ry + k = 0, implying that y is
the constant function −k/r, the given differential equation is again
satisfied. This constant solution corresponds to the above general
solution for the case C₂ = 0. Hence, the general solution now
includes all possible values of the unknown arbitrary constant:
y = C e^(rt) − k/r,  C is any constant.
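A numerical spot check (illustrative; the values r = 2 and k = 6 are arbitrary choices of mine): y(t) = C e^(rt) − k/r should satisfy y′ − r y = k for every constant C, including C = 0, the constant solution y = −k/r.

```python
import math

r, k = 2.0, 6.0   # arbitrary sample values for the constants in y' - r y = k

def residual(C, t, h=1e-6):
    """Central-difference approximation of y' - r*y for y = C e^(rt) - k/r."""
    y = lambda s: C * math.exp(r * s) - k / r
    y_prime = (y(t + h) - y(t - h)) / (2 * h)
    return y_prime - r * y(t)          # should come out equal to k

for C in (0.0, -1.0, 4.0):
    assert abs(residual(C, 0.7) - k) < 1e-4
```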
The Integrating Factor Method
In the previous examples of simple first order ODEs, we found the solutions
by algebraically separating the dependent-variable and independent-variable
terms, writing the two sides of a given equation as derivatives, each with
respect to one of the two variables, and then just integrating both sides
and simplifying to find the solution y. However, this process was feasible
only because the equations in question were of a special type, namely that
they were separable, in addition to being first order linear equations. They
do, however, illustrate the main goal of solving a first order ODE, namely to
use integration to remove the y′-term.
Most first order linear ordinary differential equations are, however, not
separable. So the previous method will not work because we will be unable
to rewrite the equation to equate two derivatives. In such instances, a more
elaborate technique must be applied. How do we, then, integrate both sides?
Let’s look again at the first order linear differential equation we are
attempting to solve, in its standard form:
y′ + p(t) y = g(t) .
What we will do is to multiply the equation through by a suitably chosen
function μ(t), such that the resulting equation
μ(t) y′ + μ(t)p(t) y = μ(t)g(t)    (*)
would have integrable expressions on both sides. Such a function μ(t) is
called an integrating factor.
Comment: The idea of an integrating factor is not really new. Recall how you
have integrated sec(x) in Math 141. The integral as given could not be
integrated. However, after the integrand has been multiplied by a suitable
form of 1, in this case (tan(x) + sec(x))/(tan(x) + sec(x)), the integration
could then proceed quite easily.
Using the substitution u = sec(x) + tan(x), du = (sec(x)tan(x) + sec²(x)) dx:
∫ sec(x) dx = ∫ sec(x)·(tan(x) + sec(x))/(tan(x) + sec(x)) dx = ∫ (sec(x)tan(x) + sec²(x))/(sec(x) + tan(x)) dx = ∫ du/u
= ln|u| + C = ln|sec(x) + tan(x)| + C
Now back to the equation
μ(t) y′ + μ(t)p(t) y = μ(t)g(t)    (*)
On the right side there is explicitly a function of t. So it could always, in
theory at least, be integrated with respect to t. The left-hand side is the more
interesting part. Take another look at the left side of (*) and compare it with
the following expression, listed side by side:
μ(t) y′ + μ(t)p(t) y  ↔  μ(t) y′ + μ′(t) y
The second expression is, by the product rule of differentiation, nothing
more than (μ(t) y)′. Notice the similarity between the two expressions.
Suppose the simple differential equation μ(t)p(t) = μ′(t) could be satisfied;
we would then have
μ(t) y′ + μ(t)p(t) y = μ(t) y′ + μ′(t) y = (μ(t) y)′
Trivially, then, the left side of (*) could be integrated with respect to t:
∫ (μ(t) y′ + μ(t)p(t) y) dt = ∫ (μ(t) y)′ dt = μ(t) y
Hence, to solve (*) we integrate both sides:
∫ (μ(t) y′ + μ(t)p(t) y) dt = ∫ μ(t)g(t) dt
→ μ(t) y = ∫ μ(t)g(t) dt    (**)
Therefore, the general solution is found after we divide the last equation
through by the integrating factor μ(t).
But before we can solve for the general solution, we must take a step back
and find this (almost magical!) integrating factor. We have seen on the
last page that it must satisfy the relation μ(t)p(t) = μ′(t). This is a
simpler equation that can be solved by our first method: separate the
variables, then integrate:
μ′(t)/μ(t) = p(t)
→ ∫ p(t) dt = ln|μ(t)| + C
→ e^(∫p(t)dt) = e^(ln|μ(t)|) e^C
→ e^(∫p(t)dt) = C₁ μ(t)
This is the general solution, of course. We just need one instance of it.
Since any nonzero function of the above form can be used as the integrating
factor, we will just choose the simplest one, that of C₁ = 1. As a result,
μ(t) = e^(∫p(t)dt).
Once it is found, we can immediately divide both sides of the equation (**)
by μ(t) to find y(t), using the formula
y(t) = (1/μ(t)) (∫ μ(t)g(t) dt + C)
Note: In order to use this integrating factor method, the equation must be
put into the standard form first (i.e., the y′-term must have coefficient 1). Else
our formulas won’t work.
Comment: As it turns out, what we have just discovered is a very powerful
tool. As long as we are able to compute the two required integrals, this
integrating factor method can be used to solve any first order linear ordinary
differential equation.
Example: We will use our newfound general purpose method to again
solve the equation
y′ − r y = k, r ≠ 0.
The equation is already in its standard form, with p(t) = − r and
g(t) = k.
The integrating factor is μ(t) = e^(∫−r dt) = e^(−rt).
The general solution is
y = (1/e^(−rt)) (∫ e^(−rt) k dt) = e^(rt) ((−k/r) e^(−rt) + C) = −k/r + C e^(rt)
That is it!
(It looks slightly different, but this is indeed the same solution we
found a little earlier using a different method.)
Example: We have previously seen the direction field showing the
approximated graph of the solutions of
y′ = t − y.
Now let us apply the integrating factor method to solve it.
The equation has as its standard form
y′ + y = t,
where p(t) = 1 and g(t) = t.
The integrating factor is μ(t) = e^(∫1 dt) = e^t.
The general solution is, therefore,
y = (1/e^t) (∫ t e^t dt) = e^(−t) (t e^t − ∫ e^t dt) = e^(−t) (t e^t − e^t + C)
= t − 1 + C e^(−t).
Summary: Solving a first order linear differential equation
y′ + p(t) y = g(t)
0. Make sure the equation is in the standard form above. If the leading
coefficient is not 1, divide the equation through by the coefficient of the
y′-term first. (Remember to divide the right-hand side as well!)
1. Find the integrating factor:
μ(t) = e^(∫p(t)dt)
2. Find the solution:
y(t) = (1/μ(t)) (∫ μ(t)g(t) dt + C)
This is the general solution of the given equation. Always remember to
include the constant of integration, which is included in the formula above as
“(+ C)” at the end. Like an indefinite integral (which gives us the solution in
the first place), the general solution of a differential equation is a set of
infinitely many functions containing one or more arbitrary constant(s).
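The two steps can also be mirrored numerically (an illustrative sketch of mine, with p(t) = 1 and g(t) = t chosen to match the example y′ + y = t solved earlier): accumulate ∫p dt with the trapezoid rule to build μ(t) = e^(∫p dt), accumulate ∫μg dt the same way, then apply the formula y = (∫μg dt + C)/μ.

```python
import math

def solve_linear(p, g, t0, y0, t_end, n=20000):
    """Numerically mirror the recipe: mu = exp(int p dt), y = (int mu*g dt + C)/mu."""
    h = (t_end - t0) / n
    P = 0.0          # running trapezoid value of int_{t0}^{t} p(s) ds
    I = 0.0          # running trapezoid value of int_{t0}^{t} mu(s) g(s) ds
    t, mu = t0, 1.0  # mu(t0) = e^0 = 1
    for _ in range(n):
        t_next = t + h
        P += 0.5 * h * (p(t) + p(t_next))
        mu_next = math.exp(P)
        I += 0.5 * h * (mu * g(t) + mu_next * g(t_next))
        t, mu = t_next, mu_next
    C = y0           # chosen so that y(t0) = (0 + C)/mu(t0) = y0
    return (I + C) / mu

# y' + y = t with y(0) = 2; the exact solution is y = t - 1 + 3 e^(-t)
y_num = solve_linear(lambda t: 1.0, lambda t: t, 0.0, 2.0, 3.0)
y_exact = 3.0 - 1.0 + 3.0 * math.exp(-3.0)
print(abs(y_num - y_exact))   # tiny (just the trapezoid-rule error)
```

Note that the constant of integration appears here exactly as in the formula: it is the "+ C" added to the accumulated integral before dividing by μ.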
Initial Value Problems (I.V.P.)
Every time we solve a differential equation, we get a general solution that is
really a set of infinitely many functions that are all solutions of the given
equation. In practice, however, we are usually more interested in finding
some specific function that satisfies a given equation and also satisfies some
additional behavioral requirement(s), rather than just finding an arbitrary
function that is a solution. The behavioral requirements are usually given in
the form of initial conditions that say the specific solution (and its
derivatives) must take on certain given values (the initial values) at some
prescribed initial time t₀. For a first order equation, the initial condition
comes simply as an additional statement in the form y(t₀) = y₀. That is to say,
once we have found the general solution, we will then proceed to substitute
t = t₀ into y(t) and find the constant C in the general solution such that y(t₀) =
y₀. The result, if it can be found, is a specific function (or functions) that
satisfies both the given differential equation and the condition that the point
(t₀, y₀) is contained on its graph.
and one or more initial values are given is called an initial value problem
(abbreviated as I.V.P. in the textbook). The specific solution thus found is
called a particular solution of the differential equation.
Graphically, the general solution of a first order ordinary differential
equation is represented by the collection of all integral curves in a direction
field, while each particular solution is represented individually by one of the
integral curves.
To summarize, an initial value problem consists of two parts:
1. A differential equation, and
2. A set of initial condition(s).
We first solve the equation to find the general solution (which contains one
or more arbitrary constants or coefficients). Then we use the initial
condition(s) to determine the exact value(s) of those constant(s). The result
is a particular solution of the equation.
Example: Solve the initial value problem
t y′ − 2y = t^3 e^t − 4,  y(1) = 2.
First divide both sides by t:
y′ − (2/t) y = t^2 e^t − 4/t
→ p(t) = −2/t, and g(t) = t^2 e^t − 4/t.
The integrating factor is
μ(t) = e^(∫−2/t dt) = e^(−2 ln t) = e^(ln t^(−2)) = t^(−2).
The general solution is
y = (1/t^(−2)) ∫ t^(−2) (t^2 e^t − 4/t) dt = t^2 ∫ (e^t − 4t^(−3)) dt = t^2 (e^t + 2t^(−2) + C)
= t^2 e^t + 2 + C t^2
Apply the initial condition:
y(1) = 2 = 1^2 e^1 + 2 + C · 1^2 = e + 2 + C
0 = e + C  →  C = −e
Therefore, y = t^2 e^t + 2 − e t^2.
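A numerical check (illustrative code, assuming the equation t y′ − 2y = t^3 e^t − 4 and the particular solution y = t^2 e^t + 2 − e t^2): the solution should satisfy both the equation and the initial condition y(1) = 2.

```python
import math

def y(t):
    return t**2 * math.exp(t) + 2 - math.e * t**2   # the particular solution

def lhs(t, h=1e-6):
    """Central-difference approximation of the left side t*y' - 2y."""
    y_prime = (y(t + h) - y(t - h)) / (2 * h)
    return t * y_prime - 2 * y(t)

assert abs(y(1.0) - 2.0) < 1e-9                     # initial condition y(1) = 2
for t in (0.5, 1.0, 2.0):
    assert abs(lhs(t) - (t**3 * math.exp(t) - 4)) < 1e-3
```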
Example: Solve the initial value problem
cos(t)y′ − sin(t) y = 3tcos(t), y(2π) = 0.
Divide through by cos(t): y′ − tan(t) y = 3t
p(t) = − tan(t) and g(t) = 3t
The integrating factor is μ(t) = e^(∫−tan(t)dt). (What is this function?)
Use the u-substitution: let u = cos(t); then du = −sin(t)dt:
∫ −tan(t) dt = ∫ −sin(t)/cos(t) dt = ∫ du/u = ln|u| + C = ln|cos(t)| + C
Near t₀ = 2π, cos(t) is positive, so we can drop the absolute value.
Hence, μ(t) = e^(ln cos(t)) = cos(t).
y(t) = (1/cos(t)) ∫ 3t cos(t) dt = (1/cos(t)) (3t sin(t) − 3∫ sin(t) dt)
= (1/cos(t)) (3t sin(t) + 3cos(t) + C₁) = 3t tan(t) + 3 + C sec(t)
y(2π) = 0 = 6π tan(2π) + 3 + C sec(2π) = 0 + 3 + C = 3 + C
C = −3
y(t) = 3t tan(t) + 3 − 3 sec(t).
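The result can be double-checked numerically (illustrative code, not part of the notes), at points where cos(t) ≠ 0:

```python
import math

def y(t):
    return 3 * t * math.tan(t) + 3 - 3 / math.cos(t)   # 3t tan(t) + 3 - 3 sec(t)

def residual(t, h=1e-6):
    """cos(t) y' - sin(t) y - 3t cos(t), which should vanish on a solution."""
    y_prime = (y(t + h) - y(t - h)) / (2 * h)
    return math.cos(t) * y_prime - math.sin(t) * y(t) - 3 * t * math.cos(t)

assert abs(y(2 * math.pi)) < 1e-9          # initial condition y(2*pi) = 0
for t in (5.0, 2 * math.pi, 7.0):          # sample points with cos(t) != 0
    assert abs(residual(t)) < 1e-3
```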
The Existence and Uniqueness Theorem (of the solution of a first
order linear equation initial value problem)
Does an initial value problem always have a solution? How many solutions are
there? The following theorem states a precise condition under which exactly
one solution would always exist for a given initial value problem.
Theorem: If the functions p and g are continuous on the interval I: α < t < β
containing the point t = t₀, then there exists a unique function y = φ(t) that
satisfies the differential equation
y′ + p(t)y = g(t)
for each t in I, and that also satisfies the initial condition
y(t0) = y 0
where y₀ is an arbitrary prescribed initial value.
That is, the theorem guarantees that the given initial value problem will
always have (existence of) exactly one (uniqueness) solution, on any interval
containing t₀, as long as both p(t) and g(t) are continuous on the same interval.
The largest of such intervals is called the interval of validity of the given
initial value problem. In other words, the interval of validity is the largest
interval such that (1) it contains t 0 and (2) it does not contain any
discontinuity of p(t) nor g(t). Conversely, neither existence nor uniqueness
of a solution is guaranteed at a discontinuity of either p(t) or g(t).
Note that, unless t₀ is actually a discontinuity of either p(t) or g(t), there
always exists a non-empty interval of validity. If, however, t₀ is indeed a
discontinuity of either p(t) or g(t), then the interval of validity will be empty.
Clearly, in such a case the conditions that the interval must contain t₀ and
that it must not contain a discontinuity of p(t) or g(t) are contradictory.
If so, such an initial value problem is not guaranteed to have a unique
solution at all.
Example: Consider the initial value problem solved earlier
cos(t)y′ − sin(t)y = 3tcos(t), y(2π) = 0.
The standard form of the equation is
y′ − tan(t)y = 3t
with p(t) = − tan(t) and g(t) = 3t. While g(t) is always continuous, p(t)
has discontinuities at t = ±π/2, ±3π/2, ±5π/2, ±7π/2, … According to
the Existence and Uniqueness Theorem, therefore, a continuous and
differentiable solution of this initial value problem is guaranteed to
exist uniquely on any interval containing t₀ = 2π but not containing
any of the discontinuities. The largest such interval is (3π/2, 5π/2).
It is the interval of validity of this problem. Indeed, the actual
solution y(t) = 3ttan(t) + 3 − 3sec(t) is defined everywhere within
this interval, but not at either of its endpoints.
How to find the interval of validity
For an initial value problem of a first order linear equation, the interval of
validity, if it exists, can be found using the following simple procedure.
Given: y′ + p(t)y = g(t), y(t0) = y 0
1. Draw the number line (which is the t▯axis).
2. Find all the discontinuities of p(t), and the discontinuities of g(t). Mark
them off on the number line.
3. Locate on the number line the initial time t₀. Look for the longest
interval that contains t₀ but contains no discontinuities.
Step 1: Draw the t▯axis.
Step 2: Mark off the discontinuities.
Step 3: Locate t₀ and determine the interval of validity.
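Steps 1–3 can be packaged into a small helper (hypothetical code of my own; the function name and interface are invented for illustration): given the discontinuities of p(t) and g(t) and the initial time t₀, it returns the interval of validity, or None when t₀ is itself a discontinuity.

```python
import math

def interval_of_validity(discontinuities, t0):
    if t0 in discontinuities:
        return None    # t0 is a discontinuity: empty interval, no guarantee
    # nearest discontinuity to the left and to the right of t0
    left = max((d for d in discontinuities if d < t0), default=-math.inf)
    right = min((d for d in discontinuities if d > t0), default=math.inf)
    return (left, right)

# e.g. discontinuities at t = -9 and 9 (as when p and g have a factor
# t^2 - 81 in their denominators):
print(interval_of_validity([-9, 9], 1))             # (-9, 9)
print(interval_of_validity([-9, 9], 10 * math.pi))  # (9, inf)
print(interval_of_validity([-9, 9], -9))            # None
```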
Example: Consider the initial value problems
(a) (t^2 − 81)y′ + 5e^(3t) y = sin(t),  y(1) = 10π
(b) (t^2 − 81)y′ + 5e^(3t) y = sin(t),  y(10π) = 1
The equation is first order linear, so the theorem applies. The standard form
of the equation is
y′ + (5e^(3t)/(t^2 − 81)) y = sin(t)/(t^2 − 81)
with p(t) = 5e^(3t)/(t^2 − 81) and g(t) = sin(t)/(t^2 − 81). Both have
discontinuities at t = ±9.
Hence, any interval such that a solution is guaranteed to exist uniquely must
contain the initial time t₀ but not contain either of the points 9 and −9.
In (a), t₀ = 1, so the interval must contain 1 but not ±9. The largest such
interval is (−9, 9).
In (b), t₀ = 10π, so the interval must contain 10π but neither of ±9. The
largest such interval is (9, ∞).
Remember that the value of y₀ does not matter at all; t₀ alone determines the
interval of validity.
Suppose the initial condition is y(−100) = 5 instead. Then the largest
interval on which the initial value problem’s solution is guaranteed to exist
uniquely will be (−∞, −9).
Lastly, suppose the initial condition is y(−9) = 88. Then we would not be
assured of a unique solution at all, since t = −9 is both t₀ and a discontinuity
of p(t) and g(t). The interval of validity would, therefore, be empty.
Depending on the problem, the interval of validity, if it exists, could be as
large as the entire real line, or arbitrarily small in length. The following
example is an initial value problem that has a very short interval of validity
for its unique solution.
Example: Consider the initial value problem
(t^2 − 10^(−2000000)) y′ + t y = 0,  y(0) = α.
With the standard form
y′ + (t/(t^2 − 10^(−2000000))) y = 0,
the discontinuities (of p(t)) are t = ±10^(−1000000). The initial time is t₀ = 0.
Therefore, the interval of validity for its solution is the interval
(−10^(−1000000), 10^(−1000000)), an interval of length 2×10^(−1000000) units!
However, the important thing is that somewhere on the t-axis a unique
solution to this initial value problem exists. Different initial values α will
give different particular solutions. But the solutions will each uniquely exist,
at a minimum, on the interval (−10^(−1000000), 10^(−1000000)).
Again, according to the theorem, the only time that a unique solution is not
guaranteed to exist anywhere is whenever the initial time t₀ just happens to
be a discontinuity of either p(t) or g(t).
Now suppose the initial condition is y(0) = 0. It should be fairly easy to see
that the constant zero function y(t) = 0 is a solution of the initial value
problem. It is of course the unique solution of this initial value problem.
Notice this solution exists for all values of t, not just inside the interval
(−10^(−1000000), 10^(−1000000)). It exists even at discontinuities of p(t). This
illustrates that, while outside of the interval of validity there is no guarantee
that a solution would exist or be unique, the theorem nevertheless does not
prevent a solution from existing, even uniquely, where the condition required by
the theorem is not met.
Nonlinear Equations: Existence and Uniqueness of Solutions
A theorem analogous to the previous one exists for general first order ODEs.
Theorem: Let the functions f and ∂f/∂y be continuous in some rectangle α <
t < β, γ < y < δ containing the point (t₀, y₀). Then, in some interval t₀ − h < t
< t₀ + h contained in α < t < β, there is a unique solution y = φ(t) of the initial
value problem
y′ = f(t, y),  y(t₀) = y₀
This is a more general theorem than the previous one, applying to all first
order ODEs. It is also less precise. It does not specify a precise region in which
a given initial value problem would have a solution, or in which a solution,
when it exists, is unique. Rather, it states a region somewhere within which
there has to be a part in which a unique solution of the initial value problem
will exist. (It does not preclude that a second solution exists outside of it.)
The bottom line is that a nonlinear equation might have multiple solutions
corresponding to the same initial condition. On the other hand it is also
possible that it might not have a solution defined on parts of the region
where f and ∂f ∂y are both continuous.
Example: Consider the (nonlinear) initial value problem
y′ = t y^(1/3),  y(0) = 0.
At the initial point, y = 0, and ∂f/∂y is not continuous there. Therefore, the
problem would not necessarily have a unique solution. Indeed, both
y = (t^2/3)^(3/2) and y = 0 are functions that satisfy the problem. (Verify this fact!)
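One way to do that verification numerically (illustrative code; the nonzero solution used here is y = (t^2/3)^(3/2), which has y′ = t (t^2/3)^(1/2) = t y^(1/3) and y(0) = 0):

```python
def f(t, y):
    return t * y ** (1.0 / 3.0)        # right-hand side, for y >= 0

def y1(t):
    return (t * t / 3.0) ** 1.5         # first solution: (t^2/3)^(3/2)

def y2(t):
    return 0.0                          # second solution: the zero function

assert y1(0.0) == 0.0 and y2(0.0) == 0.0     # both satisfy y(0) = 0
for t in (0.5, 1.0, 2.0):
    for y in (y1, y2):
        h = 1e-6
        y_prime = (y(t + h) - y(t - h)) / (2 * h)
        assert abs(y_prime - f(t, y(t))) < 1e-4   # both satisfy y' = t y^(1/3)
```

Two distinct functions through the same initial point: exactly the failure of uniqueness that the theorem warns about when ∂f/∂y is discontinuous.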
Exercises A-1.2:
1 – 4 Find the general solution of each equation below.
1. y′ − t^2 y = 4t^2
2. y′ + 10y = t^2
3. t^2 y′ − e^(3/t) y = 0
4. y′ − y = 2e^t
5 – 15 Solve each initial value problem. What is the largest interval in
which a unique solution is guaranteed to exist?
5. y′ + 2y = t e^(−t),  y(0) = 2
6. y′ − 11y = 4e^(6t),  y(0) = 9
7. ty′ − y = t^2 + t,  y(1) = 5
8. (t^2 + 1)y′ − 2ty = 2t^3 + 2t,  y(0) = −4
9. y′ + (2t − 6t^2) y = 0,  y(0) = −8
10. t^2 y′ + 4ty = 4/t,  y(−2) = 0
11. (t^2 − 49)y′ + 4ty = 4t,  y(0) = 1/7
12. y′ − y = t^2 + t,  y(0) = 3
13. y′ + y = e^t,  y(0) = 1
14. ty′ + 4y = 4, y(−2) = 6
15. tan(t)y′ − sec(t)tan^2(t) y = 0,  y(0) = π
16 – 19 Without solving the initial value problem, what is the largest interval
in which a unique solution is guaranteed to exist for each initial condition?
(a) y(π) = 7, (b) y(1) = −9, (c) y(−4) = e.
16. (t + 5) y′ + ((t − 8)(t − 1)/(t − 3)) y = t/((t − 6)(t + 1))
17. t y′ + (t + 3)^(−2) y = sec(t/3)
18. (t^2 + 4t − 5)y′ + tan(2t) y = t^2 − 16
19. (4 − t^2)y′ + ln(6 − t) y = e^(−t)
20. Find the general solution of t^2 y′ + 2ty = 2. Then show that both the
initial conditions y(1) = 1 and y(−1) = −3 result in an identical particular
solution. Does this fact violate the Existence and Uniqueness Theorem?
Answers A-1.2:
1. y = −4 + C e^(t^3/3)
2. y = t^2/10 − t/50 + 1/500 + C e^(−10t)
3. y = C exp(−e^(3/t)/3)
4. y = 2t e^t + C e^t
5. y = t e^(−t) − e^(−t) + 3e^(−2t),  (−∞, ∞)
6. y = (49/5) e^(11t) − (4/5) e^(6t),  (−∞, ∞)
7. y = t^2 + t ln(t) + 4t,  (0, ∞)
8. y = (t^2 + 1)(ln(t^2 + 1) − 4),  (−∞, ∞)
9. y = −8 exp(2t^3 − t^2),  (−∞, ∞)
10. y = 2t^(−2) − 8t^(−4),  (−∞, 0)
11. y = (t^4 − 98t^2 + 343)/(t^2 − 49)^2,  (−7, 7)
12. y = 6e^t − t^2 − 3t − 3,  (−∞, ∞)
13. y = (1/2)e^t + (1/2)e^(−t) = cosh(t),  (−∞, ∞)
14. y = 1 + 80t^(−4),  (−∞, 0).
15. y = π e^(−1) e^(sec(t)) = π e^(sec(t)−1),  (−π/2, π/2)
16. (a) (3, 6); (b) (−1, 3); (c) (−5, −1).
17. (a) (0, 3π/2); (b) (0, 3π/2); (c) (−3π/2, −3).
18. (a) (3π/4, 5π/4); (b) no such interval exists; (c) (−5, −5π/4).
19. (a) (2, 6); (b) (−2, 2); (c) (−∞, −2).
20. y = (2t + C)/t^2; they both have y = (2t − 1)/t^2 as the solution; no, different
initial conditions could nevertheless give the same unique solution.
Separable Differential Equations
A first order differential equation is separable if it can be written in the form
M(x) + N(y)y′ = 0,
where M(x) is a function of the independent variable x only, and N(y) is a
function of the dependent variable y only. It is called separable because the
independent and dependent variables could be moved to separate sides of the
equation:
N(y) (dy/dx) = −M(x).
Multiplying through by dx,
N(y) dy = −M(x) dx.
A general solution of the equation can then be found by simply integrating
both sides with respect to each respective variable:
∫ N(y) dy = −∫ M(x) dx + C.
This is the implicit general solution of the equation, where y is defined
implicitly as a function of x by the above equation relating the
antiderivatives of M(x) and N(y).
An explicit general solution, in the form y = f(x), where y is explicitly
defined by a function f(x) that itself satisfies the original differential
equation, can be found (in theory, although not always in practice) by
simplifying the implicit solution and solving for y.
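As an illustration with a made-up separable equation of my own (not one from the notes): take M(x) = −x and N(y) = y, so the equation reads y y′ − x = 0. Integrating both sides gives the implicit solution y²/2 = x²/2 + C, and solving for y gives the explicit solution y = √(x² + K) with K = 2C. A quick numerical check:

```python
import math

K = 5.0                                    # K = 2C, an arbitrary constant
def y(x):
    return math.sqrt(x * x + K)            # explicit solution of y y' = x

for x in (0.0, 1.0, 3.0):
    h = 1e-6
    y_prime = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(y(x) * y_prime - x) < 1e-6  # N(y) y' = -M(x), i.e. y y' = x
```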
Example: Solve e^y (dy/dx) − x − x^3 = 0
First, separate the x- and y-terms:
e^y (dy/dx) = x + x^3
Then multiply both sides by dx and