### Fourier series and transform

Hitherto, we've talked about "frequency components" of the sound being captured and reconstructed. But what is meant by this? In 1822 the French mathematician Jean-Baptiste Joseph Fourier examined how periodic signals could be broken down into a, potentially infinite, series of sinusoids, each with its own amplitude and a frequency that is an integer multiple of the fundamental frequency. Creating an arbitrary periodic waveform from base sinusoids is the *synthesis* of that signal; going the opposite direction is *decomposing* the combined signal into those base sinusoids. The Fourier series is a way to represent a periodic function as exactly this sum of sinusoids. In other words, any periodic signal, a square wave for example, is made up of sinusoids and can be decomposed back into a sum of those sinusoids. A square wave is made up of a sinusoid at the fundamental frequency with unit amplitude, plus sinusoids at every odd multiple of that frequency, each scaled down in amplitude by its harmonic number.

$$x(t) = \sum_{n=0}^{2} \frac{1}{2n+1} \sin\left(2 \pi (2n+1) \cdot 100\,\text{Hz} \cdot t\right)$$

This plot looks complicated until it is broken down. The fundamental frequency component of 100 Hz is shown as the blue sinusoid and is also called the "first harmonic" of the signal. There are two more harmonics, which in this case are the first two odd multiples of 100 Hz, namely 300 Hz and 500 Hz. These are the third and fifth harmonics of the fundamental frequency. These sinusoidal harmonics are inversely scaled by their harmonic number, so they are one-third and one-fifth the amplitude of the fundamental respectively. Summing all three harmonics yields the purple composite signal. You can see that, with only three sinusoids in the series, the square wave already begins to take shape. Adding more harmonic sinusoids to the series continually improves the composite signal, approaching a perfect representation of the square wave in the limit of infinitely many harmonics. In practice, only a dozen or so harmonics are required to yield a representation with acceptably low error.
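The partial series above is straightforward to synthesise numerically. A minimal NumPy sketch (the 44.1 kHz sample rate is an assumption, not stated in the original):

```python
import numpy as np

fs = 44_100                  # sample rate in Hz (assumed; CD-quality audio)
f0 = 100                     # fundamental frequency in Hz
N = 2 * fs // f0             # two full periods of the fundamental
t = np.arange(N) / fs        # time axis in seconds

# Partial Fourier series of a square wave: the first three odd harmonics
# (100, 300, 500 Hz), each scaled by 1/(2n+1)
x = sum(np.sin(2 * np.pi * (2 * n + 1) * f0 * t) / (2 * n + 1)
        for n in range(3))
```

Plotting each term of the generator alongside `x` reproduces the blue harmonics and the purple composite described above.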

Expanding on this idea is the Fourier transform (FT). The Fourier transform has the nice benefit of being valid for more than periodic signals; any non-periodic "chunk" of a signal can be considered one period of some quasi-periodic signal made up of repetitions of that chunk. A signal represented as amplitude-vs-time is said to be in the *time domain*, whereas if it is decomposed via a Fourier method, you are considering the very same signal in the *frequency domain*. In this way, the FT is used to transform signals between these two domains, even non-periodic ones. One important distinction is whether the signals you are considering are continuous functions of time, so-called *continuous-time* signals, or discretised representations, so-called *discrete-time* signals. There are forward and inverse transforms for both cases: the continuous-time Fourier transform (CTFT, or CFT) for the former and the discrete-time Fourier transform (DTFT) for the latter; when the frequency axis is also discretised, as it must be on a computer, the result is the discrete Fourier transform (DFT). One last complicated noodle in this alphabet soup is the Fast Fourier Transform (FFT), a specific algorithm for calculating the DFT faster than the rote definition of the transform allows on typical processor architectures. The FFT reduces the complexity of the DFT from $O(n^2)$ to $O(n \log n)$.
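The FFT/DFT relationship can be made concrete with a small sketch: a direct $O(n^2)$ evaluation of the DFT definition (the `naive_dft` helper below is hypothetical, written just for this comparison) produces the same numbers as NumPy's FFT; only the running time differs.

```python
import numpy as np

def naive_dft(x):
    """Evaluate the DFT definition directly: O(n^2) multiply-adds."""
    n = len(x)
    k = np.arange(n)
    # DFT matrix: W[k, m] = exp(-2j*pi*k*m / n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return W @ x

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)

# Same transform, same result; the FFT just gets there in O(n log n)
assert np.allclose(naive_dft(signal), np.fft.fft(signal))
```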

If we take the previous purple composite signal $x(t)$ of three harmonics and perform the DFT on it (since it is comprised of samples stored in an array on the computer), we can see which frequencies make it up and in what proportion.

Well, that's not much to look at. This plot displays all of the frequency content up to 22 050 Hz, half the 44 100 Hz sample rate and just beyond the limit of human hearing. We know, however, that all the action (and energy of the signal) resides down in the hundreds of hertz. Zooming in on this region sheds a little more light on the make-up of that original signal.

Now it is obvious that there is quite a bit of energy down at 100 Hz, the fundamental frequency, but also smaller components at 300 Hz and 500 Hz, at about one third and one fifth the amplitude respectively. The Fourier transform has allowed us to break down the complicated periodic signal $x(t)$ into a representation that shows its frequency-domain make-up! In this way, we know the "recipe" of the original signal is a whole lot of 100 Hz sine wave, with a few dashes of 300 Hz, and a pinch of 500 Hz.
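These spectra can be reproduced numerically with NumPy's real-input FFT. A sketch, again assuming a 44.1 kHz sample rate; using one second of signal puts the DFT bins exactly 1 Hz apart, and the $2/N$ amplitude scaling is a common convention (not taken from the original code) that makes a unit-amplitude sine read as 1.0:

```python
import numpy as np

fs = 44_100                          # sample rate in Hz (assumed)
f0 = 100
t = np.arange(fs) / fs               # one second -> bins spaced 1 Hz apart
x = sum(np.sin(2 * np.pi * (2 * n + 1) * f0 * t) / (2 * n + 1)
        for n in range(3))

X = np.fft.rfft(x)                   # DFT of the real signal, via the FFT
mags = 2 * np.abs(X) / len(x)        # unit-amplitude sine -> magnitude 1.0

# With 1 Hz bin spacing, bin k sits at k Hz; the three harmonics appear
# at amplitudes of about 1, 1/3 and 1/5, everything else is near zero
peaks = {f: mags[f] for f in (100, 300, 500)}
```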

This transform is a ubiquitous tool of analysis in science, engineering, and mathematics, allowing investigation into very complicated phenomena that evolve over time and/or space. It is difficult to overstate how widespread the Fourier transform is in the STEM fields and even beyond: it is used to pick apart population data in censuses and animal populations, to find patterns in financial trade data, to find optimal acoustic geometries for grand theatres, and more.

### Parseval's Theorem

One detour worth talking about is Parseval's theorem. Due to the unitary property of the Fourier transform, the following identity holds:

$$ \int_{-\infty}^{\infty} |x(t)|^2 \, dt = \int_{-\infty}^{\infty} |X(f)|^2 \, df$$

This is to say, Parseval's theorem relates the integral of the squared function over time to the integral of the squared spectrum over frequency. Each yields the total energy of the signal, by summing power-per-sample across time or spectral power across frequency. Using Parseval's theorem was actually a valid, tested solution to the whole project and resulted in a very concise program with the help of NumPy. However, one purpose of the Music Lights project was to learn about digital filtering, hence we move forward to the next section!
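The discrete form of the theorem is easy to sanity-check: for NumPy's unnormalised FFT it reads $\sum_n |x[n]|^2 = \frac{1}{N}\sum_k |X[k]|^2$. A sketch using the same assumed three-harmonic composite signal (this is an illustration of the identity, not the project's actual program):

```python
import numpy as np

fs = 44_100                          # sample rate in Hz (assumed)
t = np.arange(fs) / fs               # one second of signal
x = sum(np.sin(2 * np.pi * (2 * n + 1) * 100 * t) / (2 * n + 1)
        for n in range(3))

# Discrete Parseval: total energy in time equals total energy in frequency
energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(np.fft.fft(x)) ** 2) / len(x)

# Over whole periods the cross terms vanish, so each sine of amplitude A
# contributes A^2/2 per sample: total energy = (fs/2) * (1 + 1/9 + 1/25)
```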