# Lecture Notes
**[Lucas' Notes for Lecture 3](/SummerCourseHeatWave.pdf)**
# Unanswered Questions
Below are answers to some questions that came up during the lecture. If anyone remembers other unanswered questions, please post them here as well, and hopefully they'll get answered.
## Heat Equation With Nonperiodic Boundary Conditions
Suppose instead of having a metal ring we had a metal rod of length $L$, and we kept the ends of the rod at constant temperatures $u_0$ and $u_L$. How might we solve the heat equation in this context?
There is one obvious solution that satisfies these boundary conditions, namely the time-independent or steady state solution
$$ g(x,t) = u_0 + \frac{u_L- u_0}{L}x $$
This satisfies the heat equation for trivial reasons: it is time-independent and linear in $x$, so both sides of the heat equation are zero independently of one another.
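As a quick sanity check, here is a short numerical sketch (the boundary temperatures and rod length are made-up values, not from the lecture) confirming that this profile matches the boundary conditions and has a vanishing second spatial derivative:

```python
import numpy as np

u0, uL, L = 3.0, 7.0, 2.0            # hypothetical boundary temperatures and rod length
x = np.linspace(0.0, L, 101)
g = u0 + (uL - u0) / L * x           # the steady-state solution

# Boundary conditions: g(0) = u0 and g(L) = uL.
assert np.isclose(g[0], u0) and np.isclose(g[-1], uL)

# The second finite difference of a linear function vanishes (up to roundoff),
# so the spatial side of the heat equation is zero; the time side is zero
# because g does not depend on t.
second_diff = g[:-2] - 2.0 * g[1:-1] + g[2:]
assert np.max(np.abs(second_diff)) < 1e-12
```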
Now suppose that $h(x,t)$ is another solution satisfying the same boundary conditions. Then the function
$$ u(x,t) = g(x,t) - h(x,t)$$
also satisfies the heat equation, but it is zero at both endpoints:
$$ u(0,t) = u(L,t) = 0 $$
To solve for $h$, it suffices to solve for $u$, since $h = g - u$. First we'll use what's called the "separation of variables" trick to generate a lot of nice solutions, then hope and pray that any other solution can be expressed as a linear combination of these.
Here's how the separation of variables trick works. We seek a solution of the form
$$u(x,t) = a(t)b(x)$$
for some functions $a$ and $b$. Then the heat equation tells us:
$$ \frac{da}{dt}b = a \frac{d^2b}{dx^2} $$
Rearranging, we get:
$$ \frac{1}{a}\frac{da}{dt} = \frac{1}{b}\frac{d^2b}{dx^2} $$
The left hand side only depends on $t$ and the right hand side depends only on $x$. Therefore, neither side depends on either $x$ or $t$. We conclude that both sides are equal to a constant, which for convenience we'll write as $-\lambda^2$.
First we solve for $a$. It satisfies the equation
$$ \frac{da}{dt} = - \lambda^2 a$$
and therefore (up to a constant multiple) it is given by:
$$ a(t) = e^{-\lambda^2 t} $$
Next we solve for $b$. It satisfies the equation
$$ \frac{d^2b}{dx^2} = -\lambda^2 b $$
However we have to be a bit more careful in picking our solutions because $b$ is supposed to satisfy the boundary conditions
$$ b(0) = b(L) = 0$$
To satisfy $b(0) = 0$, we must take $b$ to be (a constant multiple of) a sine function:
$$ b(x) = \sin(\lambda x) $$
and to satisfy $b(L) = 0$, we must impose a constraint on $\lambda$:
$$ \lambda = \frac{\pi n}{L} \quad \text{for some positive integer } n $$
So, the most general solution we can generate in this manner is:
$$ u(x,t) = \sum_{n = 1}^{\infty} c_n e^{-\frac{\pi^2 n^2 t}{L^2}} \sin \left(\frac{\pi n x}{L} \right) $$
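To see what such a solution looks like, here is a minimal numerical sketch that evaluates a truncated version of this series (the coefficients $c_n$ and the grid are illustrative choices, not from the lecture):

```python
import numpy as np

def heat_series(x, t, coeffs, L=1.0):
    """Partial sum of c_n * exp(-pi^2 n^2 t / L^2) * sin(pi n x / L)."""
    u = np.zeros_like(x, dtype=float)
    for n, c in enumerate(coeffs, start=1):
        u += c * np.exp(-(np.pi * n / L) ** 2 * t) * np.sin(np.pi * n * x / L)
    return u

x = np.linspace(0.0, 1.0, 201)
coeffs = [1.0, 0.5, 0.25]            # hypothetical coefficients c_1, c_2, c_3
u_initial = heat_series(x, t=0.0, coeffs=coeffs)
u_later = heat_series(x, t=0.1, coeffs=coeffs)
```

Each mode decays like $e^{-\pi^2 n^2 t / L^2}$, so higher-frequency components die off fastest and the profile flattens toward zero, which is the steady state for these boundary conditions.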
We would like to assert that any solution takes this form. One way to prove this assertion would be to show that any function $f:[0,L] \to \mathbb{R}$ satisfying $f(0) = f(L) = 0$ has a unique "[Fourier sine expansion](http://mathworld.wolfram.com/FourierSineSeries.html)":
$$ f(x) = \sum_{n = 1}^{\infty} c_n \sin\left(\frac{\pi n x}{L}\right) $$
One could then allow the coefficients $c_n$ to vary with $t$ and apply the same method of solution that we used in the case of periodic boundary conditions.
In fact, every function of the kind described above does have a Fourier sine expansion. The link above contains a hint of how to do it. First you extend the function $f(x)$ to a certain odd, periodic function $\tilde{f}(x)$ defined on the interval $[-L,L]$. Then you can use convergence of the usual Fourier series for $\tilde{f}(x)$.
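Concretely, the coefficients are given by the standard formula $c_n = \frac{2}{L}\int_0^L f(x)\sin\left(\frac{\pi n x}{L}\right)dx$. Below is a small numerical sketch (the function $f$ and the grid are illustrative choices) that computes a few coefficients by the trapezoid rule and checks the partial sum against $f$:

```python
import numpy as np

L = 1.0
f = lambda x: x * (L - x)            # example function with f(0) = f(L) = 0
x = np.linspace(0.0, L, 2001)
fx = f(x)

def sine_coeff(n):
    """c_n = (2/L) * integral_0^L f(x) sin(pi n x / L) dx, via the trapezoid rule."""
    integrand = fx * np.sin(np.pi * n * x / L)
    return (2.0 / L) * np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(x))

coeffs = [sine_coeff(n) for n in range(1, 6)]
approx = sum(c * np.sin(np.pi * n * x / L) for n, c in enumerate(coeffs, start=1))
max_err = np.max(np.abs(fx - approx))
```

For this particular $f$, the even coefficients vanish by symmetry about $x = L/2$, and the five-term partial sum already matches $f$ closely.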
## Convergence for Not-So-Nice Fourier Series
How do we know that the Fourier series of a square wave or sawtooth function converges?
The answer to this question depends greatly on the type of convergence desired. Aside from the convergence we already proved, the next easiest type of convergence is $L^2$ convergence. The formal statement is that
$$ \lim_{N \to \infty} \sqrt{\int_0^{2\pi} \left| f(\theta) - \sum_{n = - N}^N c_n e^{in\theta} \right|^2 \, d\theta} = 0 $$
In other words, as we let $N$ go to infinity, the root mean square error of our approximation gets arbitrarily small.
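As an illustration, here is a numerical sketch for the square wave (the grid and cutoffs are made-up choices): the root mean square error of the partial sums shrinks as $N$ grows, even though pointwise convergence fails at the jumps.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 4001)
f = np.where(theta < np.pi, 1.0, -1.0)   # square wave on [0, 2*pi)

def rms_error(N):
    # Partial sum of the square wave's Fourier series:
    # (4/pi) * sum over odd n <= N of sin(n*theta)/n.
    partial = sum(4.0 / (np.pi * n) * np.sin(n * theta)
                  for n in range(1, N + 1, 2))
    err = f - partial
    return np.sqrt(np.mean(err ** 2))

errors = [rms_error(N) for N in (1, 5, 25, 125)]
# errors is strictly decreasing: the approximation improves in mean square,
# even though the Gibbs overshoot near the jumps never goes away pointwise.
```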