# Lecture Notes

**[Lucas' Notes for Lecture 3](/SummerCourseHeatWave.pdf)**

The goal of the lecture (and the only take-away point, as far as the rest of the course is concerned!) was to prove the following two theorems:

Theorem 1: If $f:A \to \mathbb{C}$ is a holomorphic function on an annulus, then it has a Laurent series expansion
$$ f(z) = \sum_{n = -\infty}^{\infty} c_n z^n = \cdots + \frac{c_{-2}}{z^2} + \frac{c_{-1}}{z} + c_0 + c_1 z + c_2 z^2 + \cdots $$

Theorem 2: If $f: D \to \mathbb{C}$ is a holomorphic function on a disk, then it has a power series expansion
$$ f(z) = \sum_{n = 0}^{\infty} c_n z^n = c_0 + c_1 z + c_2 z^2 + \cdots $$

The reason we discussed convergence of Fourier series was to give some taste of the type of mathematical analysis that goes into proving things rigorously using Fourier series. The reason we discussed the Heat and Wave equations was to illustrate other examples of the methods we used to prove Theorems 1 and 2. So, if you only care about holomorphic functions you don't need to worry about those examples.

## Alternate Proofs

You may find it helpful to think about other ways of deriving Theorems 1 and 2. For an alternate proof of Theorem 1 (which may be more comprehensible, since it doesn't involve any confusing changes of coordinates), see Problems 7 and 8 of Problem Set 3.

An alternate proof of Theorem 2 goes as follows. Since $f$ is holomorphic on a disk, it has a Laurent expansion. The statement of Theorem 2 says that the negative terms in this Laurent expansion are zero.

First let's prove that $c_{-1}$ is zero. Since $c_{-1}$ is the residue of $f$ at zero, it is given by
$$ c_{-1} = \frac{1}{2\pi i}\int_{\gamma_r} f(z)\, dz $$
where $\gamma_r$ is a small circle of radius $r$ that goes counterclockwise around the origin. As we shrink the radius of this circle, its length $2\pi r$ goes to zero. On the other hand, since $f$ is holomorphic on the whole disk, $f(z)$ tends to $f(0)$ as $z \to 0$, so $|f|$ stays bounded near the origin. The integral is therefore bounded in size by the length of the circle times the maximum of $|f|$ on it, which is roughly $2\pi r\,|f(0)|$. Taking the limit as $r \to 0$,
$$ c_{-1} = \lim_{r \to 0} \frac{1}{2\pi i} \int_{\gamma_r} f(z)\, dz = 0 $$
so we conclude that $c_{-1} = 0$.

Now let's prove that $c_{-2}$ has to be zero. Consider the function
$$ g(z) = z f(z) = \cdots + \frac{c_{-3}}{z^2} + \frac{c_{-2}}{z} + c_{-1} + c_0 z + c_1 z^2 + \cdots $$
Since $g$ is also holomorphic on $D$, and its residue at zero is $c_{-2}$, we conclude by the above argument that $c_{-2} = 0$. More generally, the function $z^{k-1}f(z)$ is holomorphic and its residue at $0$ is $c_{-k}$, so we conclude that $c_{-k} = 0$.
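This is not part of the lecture, but if you want a quick numerical sanity check of the argument above, the short Python sketch below approximates the coefficient integrals $c_n = \frac{1}{2\pi i}\int_{|z| = r} f(z)\, z^{-n-1}\, dz$ by a Riemann sum around a small circle. The helper name `laurent_coefficient` and the particular choices of $f = e^z$, the radius, and the number of sample points are just assumptions made for this illustration.

```python
import numpy as np

def laurent_coefficient(f, n, r=0.5, num_points=2048):
    """Estimate c_n = (1 / (2 pi i)) * integral of f(z) / z^(n+1) dz over the
    circle |z| = r, using a uniform Riemann sum in the angle theta."""
    theta = 2 * np.pi * np.arange(num_points) / num_points
    z = r * np.exp(1j * theta)
    # With z = r e^{i theta} we have dz = i z dtheta, so the integrand
    # f(z) z^{-n-1} dz / (2 pi i) simplifies to f(z) z^{-n} dtheta / (2 pi),
    # and the integral becomes the average of f(z_k) z_k^{-n} over the samples.
    return np.mean(f(z) * z ** (-n))

f = np.exp  # entire function, so every c_{-k} should vanish and c_n = 1/n!

for n in range(-3, 4):
    print(n, laurent_coefficient(f, n))
# Expected, up to numerical error: c_{-3}, c_{-2}, c_{-1} are essentially 0,
# while c_0, c_1, c_2, c_3 are close to 1, 1, 1/2, 1/6.
```

Trying the same experiment with a function that is *not* holomorphic at $0$ (say $e^z/z$) should instead produce a nonzero estimate for $c_{-1}$, which is a good way to convince yourself the vanishing above is not an accident of the numerics.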
# Unanswered Questions

Below are answers to some questions that came up during the lecture. If anyone remembers other unanswered questions please post them here as well, and hopefully they'll get answered.

## Heat Equation With Nonperiodic Boundary Conditions

Suppose instead of having a metal ring we had a metal rod of length $L$, and we kept the ends of the rod at constant temperatures $u_0$ and $u_L$. How might we solve the heat equation in this context?

There is one obvious solution that satisfies these boundary conditions, namely the time-independent or steady state solution
$$ g(x,t) = u_0 + \frac{u_L - u_0}{L}x $$
This satisfies the heat equation for trivial reasons: it is time-independent and its second spatial derivative is zero, hence both sides of the heat equation are zero independently of one another.

Now suppose that $h(x,t)$ is another solution satisfying the same boundary conditions. Then the function
$$ u(x,t) = g(x,t) - h(x,t) $$
also satisfies the heat equation, but it is zero at both ends of the rod for all time:
$$ u(0,t) = u(L,t) = 0 $$
So to solve for $h$, we only need to solve for $u$. First we'll use what's called the "separation of variables" trick to generate a lot of nice solutions, then hope and pray that any other solution can be expressed as a linear combination of these.

Here's how the separation of variables trick works. We seek a solution of the form
$$ u(x,t) = a(t)b(x) $$
for some functions $a$ and $b$. Then the heat equation tells us:
$$ \frac{da}{dt}b = a \frac{d^2b}{dx^2} $$
Rearranging, we get:
$$ \frac{da}{dt}/a = \frac{d^2b}{dx^2}/b $$
The left hand side depends only on $t$ and the right hand side depends only on $x$. Therefore, neither side depends on either $x$ or $t$. We conclude that both sides are equal to a constant, which for convenience we'll write as $-\lambda^2$.

First we solve for $a$. It satisfies the equation
$$ \frac{da}{dt} = -\lambda^2 a $$
and therefore (up to a constant multiple) it is given by:
$$ a(t) = e^{-\lambda^2 t} $$

Next we solve for $b$. It satisfies the equation
$$ \frac{d^2b}{dx^2} = -\lambda^2 b $$
However, we have to be a bit more careful in picking our solutions, because $b$ is supposed to satisfy the boundary conditions
$$ b(0) = b(L) = 0 $$
To satisfy $b(0) = 0$, we must take $b$ to be (a constant multiple of) a sine function:
$$ b(x) = \sin(\lambda x) $$
and to satisfy $b(L) = 0$, we must impose a constraint on $\lambda$:
$$ \lambda = \frac{\pi n}{L} \qquad \text{for some positive integer } n $$
So, the most general solution we can generate in this manner is:
$$ u(x,t) = \sum_{n = 1}^{\infty} c_n e^{-\frac{\pi^2 n^2 t}{L^2}} \sin\left(\frac{\pi n x}{L}\right) $$

We would like to assert that any solution takes this form. One way to prove this assertion would be to show that any function $f:[0,L] \to \mathbb{R}$ satisfying $f(0) = f(L) = 0$ has a unique "[Fourier sine expansion](http://mathworld.wolfram.com/FourierSineSeries.html)":
$$ f(x) = \sum_{n = 1}^{\infty} c_n \sin\left(\frac{\pi n x}{L}\right) $$
One could then allow the coefficients $c_n$ to vary with $t$ and apply the same method of solution that we used in the case of periodic boundary conditions.

In fact, every (reasonably nice) function of the kind described above does have a Fourier sine expansion. The link above contains a hint of how to prove it: first extend the function $f(x)$ to an odd periodic function $\tilde{f}(x)$ defined on the interval $[-L,L]$, then use convergence of the usual Fourier series for $\tilde{f}(x)$.

## Convergence for not-so-nice Fourier series

How do we know that the Fourier series of a square wave or sawtooth function converges? The answer to this question depends greatly on the type of convergence desired. Aside from the convergence we already proved, the next easiest type of convergence is $L^2$ convergence. The formal statement is that
$$ \lim_{N \to \infty} \sqrt{\int_0^{2\pi} \left| f(\theta) - \sum_{n = -N}^{N} c_n e^{in\theta} \right|^2 d\theta} = 0 $$
In other words, as we let $N$ go to infinity, the root mean square error of our approximation gets arbitrarily small.
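Again not part of the lecture, but here is a small numerical illustration of this for the square wave. It uses the standard sine series of the square wave, $\frac{4}{\pi}\sum_{n \text{ odd}} \frac{\sin(n\theta)}{n}$, and checks that the root mean square error of the partial sums shrinks as $N$ grows; the grid size and the particular values of $N$ are arbitrary choices for the example.

```python
import numpy as np

# Square wave on [0, 2*pi): +1 on (0, pi), -1 on (pi, 2*pi).
theta = np.linspace(0, 2 * np.pi, 20000, endpoint=False)
f = np.where(theta < np.pi, 1.0, -1.0)

def partial_sum(N):
    """Partial Fourier sum over |n| <= N, which for the square wave collapses
    to (4/pi) * sum of sin(n*theta)/n over odd n up to N."""
    s = np.zeros_like(theta)
    for n in range(1, N + 1, 2):  # only odd harmonics have nonzero coefficients
        s += (4 / np.pi) * np.sin(n * theta) / n
    return s

for N in (1, 5, 25, 125, 625):
    # Approximate the L^2 error sqrt(integral of |f - S_N|^2 dtheta)
    # by the mean over the uniform grid times the interval length 2*pi.
    l2_error = np.sqrt(np.mean((f - partial_sum(N)) ** 2) * 2 * np.pi)
    print(N, l2_error)
# The printed errors shrink toward 0 as N grows (roughly like 1/sqrt(N)).
```

Note that the convergence here is only in the $L^2$ sense: near the jumps the partial sums always overshoot by roughly 9% of the jump height (the Gibbs phenomenon), so they do not converge uniformly, even though the root mean square error goes to zero.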