PHY354H1, Physics, University of Toronto St. George
Instructor: Erich Poppitz (Winter term)

Lecture 14: Fourier Series Examples
Lecture 14: Fourier Series Examples
• Last time: If we represent a function f(t) that is periodic on T as a Fourier series:

  f(t) = A_0 + \sum_{n=1}^{\infty} \left[ A_n \cos\left(\frac{2\pi n}{T} t\right) + B_n \sin\left(\frac{2\pi n}{T} t\right) \right]   (1)

then we derived formulas for the amplitudes of the sinusoids in the Fourier series:

  B_n = \frac{2}{T} \int_0^T f(t) \sin\left(\frac{2\pi n}{T} t\right) dt   (2)

  A_n = \frac{2}{T} \int_0^T f(t) \cos\left(\frac{2\pi n}{T} t\right) dt   (3)

  A_0 = \frac{1}{T} \int_0^T f(t)\, dt   (4)
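As a quick sanity check (a sketch of my own, not part of the course materials), the amplitude integrals (2)-(4) can be evaluated numerically for the square wave below, using F_0 = 1 and T = 2π:

```python
import numpy as np

# Numerical check of the amplitude formulas (2)-(4) for the square wave,
# using F0 = 1 and T = 2*pi and simple midpoint-rule integration.
# Illustrative sketch only, not the course's own code.

F0 = 1.0
T = 2 * np.pi
M = 200000                              # number of integration points
t = (np.arange(M) + 0.5) * T / M        # midpoints of M equal subintervals
dt = T / M
f = np.where(t < np.pi, F0, -F0)        # the square wave on one period

A0 = np.sum(f) * dt / T                 # equation (4)
A1 = 2 / T * np.sum(f * np.cos(t)) * dt # n = 1 term of equation (3)
B1 = 2 / T * np.sum(f * np.sin(t)) * dt # n = 1 term of equation (2)

print(A0, A1)   # both ~0, matching the result below
print(B1)       # ~4*F0/pi = 1.2732..., matching B_n for odd n
```

The midpoint grid deliberately avoids sampling exactly at the jump points t = 0, π, 2π.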
• As an example, we found the amplitudes for the square wave:

  f(t) = \begin{cases} F_0 & 0 \le t \le \pi \\ -F_0 & \pi < t \le 2\pi \end{cases} \qquad \text{(repeating, so } T = 2\pi\text{)}   (5)

Here were the amplitudes we found:

  A_0 = 0
  A_n = 0
  B_n = \frac{2F_0}{\pi n}\left(1 - (-1)^n\right)

This last one can be simplified a bit to read: B_n = 0 for n even, and B_n = \frac{4F_0}{\pi n} for n odd.
• So our Fourier series for the square wave can be written:

  f(t) = \sum_{\substack{n=1 \\ n\ \text{odd}}}^{\infty} \frac{4F_0}{\pi n} \sin(nt)   (6)
• Today we will investigate how this sum gives us the square wave. We will also try to develop some intuition about why A_0 and A_n were 0 in this example, and we will do another example.
Plotting the Fourier series for the square wave
• The python program fourier_squarewave.py plots the square wave defined in equation (5) over 2 whole periods and also plots the Fourier series in equation (6) from n = 1 to N, where you can specify N. Although the Fourier series only exactly represents the square wave when N = ∞, the fact that our coefficients B_n are proportional to n^{-1} suggests that the higher n terms will be small relative to the lower n terms.
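The course script itself is not reproduced in these notes, but a minimal sketch of what a fourier_squarewave.py-style program might look like is below. One assumption on my part: judging from the plots that follow, N is taken to count the nonzero (odd-n) terms kept from equation (6).

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of a fourier_squarewave.py-style plot (assumption: N counts the
# nonzero, odd-n terms actually kept from equation (6)).

F0 = 1.0
N = 10                                   # number of odd-n terms to keep
t = np.linspace(0.0, 4 * np.pi, 4000)    # two whole periods of the wave

# Partial sum of equation (6): n = 1, 3, 5, ..., 2N - 1.
series = sum(4 * F0 / (np.pi * n) * np.sin(n * t)
             for n in range(1, 2 * N, 2))

square = F0 * np.sign(np.sin(t))         # the square wave itself

plt.plot(t / np.pi, square, label="Square wave")
plt.plot(t / np.pi, series, label="Fourier series")
plt.xlabel("time/pi")
plt.ylabel("f(t)")
plt.legend()
plt.savefig("fourier_squarewave.png")
```

Changing N and re-running reproduces the sequence of plots discussed below.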
• First let's start by setting N = 1. This means we are only taking the first sinusoid in the sum. Here is a plot of the results:
[Figure: "Fourier series for square wave with 1 component". The square wave and its N = 1 Fourier series, plotted as f(t) versus time/pi from 0 to 4.]
Although the single term doesn't do well in getting the small details right, it does do well in getting the general large-scale structure right. For example, notice that the square wave and the Fourier series are both positive in the same regions and negative in the same regions. However, we do notice that the single sinusoid underestimates the magnitude of f(t) in 3 regions during a single period (near 0, π, and 2π) and overestimates it in 2 regions in a single period (around π/2 and 3π/2). In order to 'fix' these under- and overestimates (which are shorter in scale than our first sinusoid), we will need a sinusoid with a higher frequency.
• So let's run the python program again with N = 2 to include the first and second terms in the sum. Here is what we get:
[Figure: "Fourier series for square wave with 2 components". The square wave and its N = 2 Fourier series, plotted as f(t) versus time/pi from 0 to 4.]
Compared to the previous picture we are doing a little better here. Adding this second sinusoid kind of 'flattened' the first sinusoid near its maximum and minimum and therefore made it closer in resemblance to the square wave. Also notice that the number of regions where over- and underestimates occur increases, but the width of the regions decreases (i.e. the 'errors' are getting smaller in scale). For example, in a single period, I count 6 regions where the magnitude of the function is underestimated and 4 regions where it is overestimated (compare this to the 3 regions of underestimation and 2 regions of overestimation when N = 1).
• As we continue to increase N, this trend will continue. The superposition of sinusoids will continue to 'flatten' where they should, and the scale of the errors will decrease. For example, here is the plot when N = 10 (I recommend trying others yourself):
[Figure: "Fourier series for square wave with 10 components". The square wave and its N = 10 Fourier series, plotted as f(t) versus time/pi from 0 to 4.]
• We are doing a pretty good job now of representing the square wave with sinusoids. Notice that there are distinct places where the errors are worst: at t = 0, π, and 2π. Notice these are the regions where the square wave abruptly changes value (there is a discontinuity at these places). Because sinusoids are SOOOO good at being continuous and forever differentiable, it is actually hard for them to represent discontinuities, which is why you get the largest errors at the discontinuities. Also notice that the amplitude of the error in these locations is NOT decreasing (although it is elsewhere). This is known as the "Gibbs Phenomenon" and happens near discontinuities when using Fourier series.
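This can be checked numerically (a sketch of my own, not from the notes): track the peak of the partial sum of equation (6) just to the left of the jump at t = π as more terms are kept. The overshoot settles near 1.18 F_0 (roughly 9% of the full 2F_0 jump) rather than shrinking:

```python
import numpy as np

# Gibbs phenomenon check: the peak overshoot of the partial sums of
# equation (6) near the jump at t = pi does not shrink as more terms
# are kept; it tends to about 1.18*F0 (~9% of the full 2*F0 jump).

F0 = 1.0

def partial_sum(t, n_max):
    """Partial sum of equation (6): odd n from 1 up to n_max."""
    return sum(4 * F0 / (np.pi * n) * np.sin(n * t)
               for n in range(1, n_max + 1, 2))

t = np.linspace(0.5 * np.pi, np.pi, 50001)   # window just left of the jump
peaks = [partial_sum(t, n_max).max() for n_max in (9, 99, 999)]
print(peaks)   # all close to 1.18*F0, none shrinking toward F0
```

The peak moves closer to the jump as n_max grows, but its height stays essentially fixed.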
• Let's try a big N value, say N = 100. Then we get:
[Figure: "Fourier series for square wave with 100 components". The square wave and its N = 100 Fourier series, plotted as f(t) versus time/pi from 0 to 4.]
Now we really are doing well except for near the discontinuities. However, notice that the WIDTH of the errors near the discontinuities continues to shrink (even if their height doesn't). This means they can't affect the dynamics much. In the limit that N → ∞, the width is 0 and therefore they don't affect anything at all.
• Just for fun, and because we would never do it without a computer program, let's try N = 10000:
[Figure: "Fourier series for square wave with 10000 components". The square wave and its N = 10000 Fourier series, plotted as f(t) versus time/pi from 0 to 4.]
Notice that it now looks like the Gibbs phenomenon is gone, but this is just because our time increments are larger than the width of the Gibbs error, so we don't see it (the program misses plotting the points where the Gibbs phenomenon occurs).
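This sampling effect is easy to reproduce (a sketch using the same partial-sum definition as equation (6)): with thousands of terms the Gibbs spike near t = π is narrower than the spacing of an ordinary plotting grid, so the grid steps right over it, while a grid zoomed onto the jump still finds it:

```python
import numpy as np

# Why the N = 10000 plot seems Gibbs-free: the spike is narrower than the
# spacing of an ordinary plotting grid.  A grid zoomed onto the jump at
# t = pi still catches the ~1.18*F0 overshoot.

F0 = 1.0

def partial_sum(t, n_max):
    """Partial sum of equation (6): odd n from 1 up to n_max."""
    return sum(4 * F0 / (np.pi * n) * np.sin(n * t)
               for n in range(1, n_max + 1, 2))

n_max = 9999
coarse = np.linspace(0.0, 2 * np.pi, 2000)                 # ordinary plotting grid
zoom = np.pi - np.linspace(0.0, 10 * np.pi / n_max, 2001)  # just left of the jump

coarse_max = partial_sum(coarse, n_max).max()
zoom_max = partial_sum(zoom, n_max).max()
print(coarse_max)   # close to F0: the spike falls between grid points
print(zoom_max)     # ~1.18*F0: the overshoot is still there
```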
• You can use this python program to plot the Fourier series components of any function as long as you know the formula for the amplitudes of the sinusoids.
Some Intuition About the Amplitudes
• You might be wondering if it was just a coincidence that the A_0 and A_n amplitudes were 0 for the square wave. In fact, it was not a coincidence, and there is an easy way to predict whether a whole set of amplitudes (like all the A_n's or all the B_n's) will be 0 for specific functions (which is nice because then you don't have to do the integral to find that all the coefficients are 0).

• It turns out that it all depends on the symmetry of the function. Specifically, on whether the function is "odd" or "even" about t = T/2 (i.e. about the midpoint of the integration interval).
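The symmetry rule being described can be illustrated numerically. Below is a sketch (the triangular function g(t) = |t − π| is my own invented example, not from the notes): the square wave is odd about t = T/2, so all of its cosine amplitudes A_n vanish, while g is even about t = T/2, so all of its sine amplitudes B_n vanish:

```python
import numpy as np

# Symmetry check (sketch; the triangle g(t) = |t - pi| is an invented
# example): a function odd about t = T/2 has all cosine amplitudes A_n = 0,
# and a function even about t = T/2 has all sine amplitudes B_n = 0.

T = 2 * np.pi
M = 200000
t = (np.arange(M) + 0.5) * T / M          # midpoint grid on one period
dt = T / M

square = np.where(t < np.pi, 1.0, -1.0)   # odd about the midpoint t = pi
tri = np.abs(t - np.pi)                   # even about the midpoint t = pi

A_square = [2 / T * np.sum(square * np.cos(n * t)) * dt for n in range(1, 6)]
B_tri = [2 / T * np.sum(tri * np.sin(n * t)) * dt for n in range(1, 6)]

print(A_square)   # all ~0: odd function, cosine amplitudes vanish
print(B_tri)      # all ~0: even function, sine amplitudes vanish
```

In both cases the integrand is odd about t = π, so the contributions from the two halves of the period cancel, and there is no need to do the integrals term by term.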
