Why can we have a Fast Fourier Transform?

12 March 2009 at 3:23 pm

The Fourier transform was introduced by Fourier as a tool to solve heat equations, but its discrete version is now used throughout the computing world, probably trillions of times each second. It may well be more: I was assuming an average computer does billions of Discrete Cosine Transforms per day for Web browsing (JPEG images and YouTube) and music listening, but how large is the contribution of the millions of people watching MPEG-2 compressed television programs on satellite or terrestrial digital broadcasting?

It is thus important to know that the Fourier transform (which in practice always means the Fast Fourier Transform) is really fast (dozens or hundreds of times faster than the naive algorithm). I am not yet sure about the fast versions of the variants of the discrete transform (the cosine transform and its friends), but I guess they can be derived from the classical case.

What is it?

The classical Fourier transform takes a T-periodic function f and computes for each integer n the integral

f_n = \frac 1 {\sqrt T} \int_0^T f(t) \exp(-in\omega t) \, dt

where \omega = 2\pi/T. It separates f into components having period T/n, and has the property (Parseval's identity) that the energy E(f) = \int_0^T |f(t)|^2 dt is the sum \sum_n |f_n|^2.

The discrete Fourier transform takes a discrete function given by x = (x_0, x_1, \dots, x_{n-1}) and computes the discrete analogue

y_p = \sum_{q = 0}^{n-1} x_q \exp(-2i\pi \frac{pq}{n})


which can also be represented by a matrix

F_n=\begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & \zeta^{-1} & \cdots & \zeta^{-(n-1)} \\ 1 & \zeta^{-2} & \cdots & \zeta^{-2(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & \zeta^{-(n-1)} & \cdots & \zeta^{-(n-1)^2} \end{pmatrix}

where \zeta = \exp(2i\pi/n) is a primitive n-th root of unity. It has the property F_n F_n^\ast = n \cdot I, so the inverse of the Fourier transform is given by the complex conjugate matrix (up to the scalar n). Moreover, F_n/\sqrt n is unitary, which means the Hermitian norm of vectors is preserved up to multiplication by \sqrt n.
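
To make the definition concrete, here is a minimal Python sketch of the transform and its inverse, written directly from the formulas above (quadratic time, no cleverness yet):

import cmath

def dft(x):
    # y_p = sum_q x_q exp(-2 i pi p q / n), straight from the definition
    n = len(x)
    return [sum(x[q] * cmath.exp(-2j * cmath.pi * p * q / n) for q in range(n))
            for p in range(n)]

def idft(y):
    # F_n F_n* = n Id, so the inverse is the conjugate matrix divided by n
    n = len(y)
    return [sum(y[p] * cmath.exp(2j * cmath.pi * p * q / n) for p in range(n)) / n
            for q in range(n)]

# round trip: idft(dft(x)) recovers x up to floating-point error
x = [1, 2, 3, 4]
assert all(abs(u - v) < 1e-9 for u, v in zip(idft(dft(x)), x))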

Other interpretations

The matrix F_n is a particular case of a Vandermonde matrix, and can also be interpreted as the linear map taking a polynomial P_x(T) = x_0 + x_1 T + \cdots + x_{n-1} T^{n-1} to the vector of values (P_x(1), P_x(\zeta^{-1}), \dots, P_x(\zeta^{-(n-1)})). Actually, what we really want is the converse: take a set of values, and find a polynomial having these values at the roots of unity (a polynomial being written as a sum of monomials, which are the elementary periodic functions). This is a case of Lagrange interpolation, but it is actually given by essentially the same matrix, since the inverse of F_n is its complex conjugate divided by n.

Another way of saying the same thing is that the Fourier transform diagonalises the product of polynomials (which corresponds to the convolution product of signals): the convolution product of two vectors is not easy to compute rapidly, but F_n(x \ast y) is obtained by multiplying coordinate-wise F_n(x) and F_n(y). As a reminder:

(f \ast g)_k = \displaystyle\sum_{i+j \equiv k \mod n} f_i g_j

F_n(f \ast g)_k = (F_n f)_k (F_n g)_k
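
Here is a quick numerical check of this diagonalisation property, reusing the naive dft sketch above:

def cyclic_convolution(f, g):
    # (f * g)_k = sum of f_i g_j over i + j = k mod n
    n = len(f)
    return [sum(f[i] * g[(k - i) % n] for i in range(n)) for k in range(n)]

f, g = [1, 2, 3, 4], [5, 6, 7, 8]
lhs = dft(cyclic_convolution(f, g))
rhs = [u * v for u, v in zip(dft(f), dft(g))]
assert all(abs(u - v) < 1e-9 for u, v in zip(lhs, rhs))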

Keyword: tensor product

Humans would traditionally compute a Discrete Fourier Transform using the above formulas: each coefficient requires computing n products by roots of unity, giving the whole thing a complexity of n^2 operations. A little algebra will tell us how to bring this down to about nk operations, when n=2^k.

The key observation is that the matrix F_n, which has size 2^k, is closely related to a tensor product \widetilde F_n = \widetilde F_n^{(0)} \otimes \cdots \otimes \widetilde F_n^{(k-1)} (see the update below: F_n itself is not quite a tensor product). If you don’t know tensor products, a quick way of understanding them is to say that if matrices A_i operate on vectors whose index variable is a_i (having n_i entries), the matrix A = \bigotimes A_i operates on vectors indexed by the tuple of variables (a_1, \dots, a_k) (these vectors have n_1 n_2 \cdots n_k entries), and its coefficients are

A[(a_1, \dots, a_k), (b_1, \dots, b_k)] = A_1[a_1,b_1] \times A_2[a_2,b_2] \times \cdots \times A_k[a_k,b_k]
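
In code, this coefficient formula can be taken as the definition (applied mathematicians call this matrix the Kronecker product); a small Python sketch, inefficient since it builds the full matrix:

from math import prod

def tensor_product(mats):
    # A[(a_1,...,a_k), (b_1,...,b_k)] = A_1[a_1,b_1] x ... x A_k[a_k,b_k],
    # flat row/column indices being read as tuples of mixed-radix digits
    sizes = [len(m) for m in mats]
    total = prod(sizes)

    def digits(idx):
        # digits (a_1, ..., a_k) of a flat index, first factor most significant
        out = []
        for s in reversed(sizes):
            out.append(idx % s)
            idx //= s
        return out[::-1]

    return [[prod(m[i][j] for m, i, j in zip(mats, digits(a), digits(b)))
             for b in range(total)] for a in range(total)]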

For those of you who thrive on algebraic geometry and schemes, this translates into the following property: first remark that \{0,1\}^k \simeq \{0, \cdots, n-1\} by means of binary representation of integers. From this deduce that the scheme Z of n-th roots of unity can be seen as the image, by the map (X_0, \dots, X_{k-1}) \mapsto X_0 X_1 \cdots X_{k-1}, of the product of 2-point schemes Z_q = \mathrm{Spec}\ \mathbb C[X]/((X-1)(X-\zeta_q)) containing only 1 and \zeta_q = \zeta^{2^q}. The corresponding map of rings

\frac{\mathbb C[X]}{(X^n-1)} \to \frac{\mathbb C[X_{k-1}]}{(X_{k-1}^2-1)} \otimes \frac{\mathbb C[X_{k-2}]}{(X_{k-2}-1)(X_{k-2}-i)} \otimes \frac{\mathbb C[X_{k-3}]}{(X_{k-3}-1)(X_{k-3}-\zeta_{k-3})} \otimes \cdots \otimes \frac{\mathbb C[X_0]}{(X_0-1)(X_0-\zeta)}

mapping X \to X_0 \otimes X_1 \otimes \cdots \otimes X_{k-1} is then an isomorphism. It can also be written more suggestively as:

\frac{\mathbb C[X]}{(X^n-1)} \simeq \frac{\mathbb C[X_{k-1}]}{(X_{k-1}^2-1)} \otimes \frac{\mathbb C[X_{k-2}^2]}{(X_{k-2}^4-1)} \otimes \frac{\mathbb C[X_{k-3}^4]}{(X_{k-3}^8-1)} \otimes \cdots \otimes \frac{\mathbb C[X_0^{n/2}]}{(X_0^n-1)}

although the isomorphism is less clear.

Suppose now that our vector is given in terms of a polynomial
f = \sum_a f_a X_0^{a_0} \cdots X_{k-1}^{a_{k-1}}
where a is written \overline{a_{k-1}\dots a_1 a_0} in binary form. Evaluation at the roots of unity is given by a matrix \widetilde F_n, whose coefficient \widetilde F_n[a,b] is the evaluation at \zeta^a of X_0^{b_0} \cdots X_{k-1}^{b_{k-1}}. Remember that X_q(\zeta^a) = \zeta^{a_q 2^q} (the X_q's decompose a as a sum of powers of two): hence \widetilde F_n[a,b] = \prod_q \zeta^{a_q b_q 2^q}, and this is a tensor product of matrices

\widetilde{F_n}^{(q)} = \begin{pmatrix} 1 & 1 \\ 1 & \zeta^{2^q} \\ \end{pmatrix}

The decomposition property can be stated as the fact that functions on a product are the tensor product of rings of functions, and evaluation maps are tensor products of evaluation maps.
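
As a concrete check, here is a small sketch comparing the coefficient formula \widetilde F_n[a,b] = \prod_q \zeta^{a_q b_q 2^q} with the tensor product of these 2×2 factors, reusing the tensor_product sketch above (since bit q of the index corresponds to the factor \widetilde F_n^{(q)}, the factor for the highest bit comes first):

import cmath

def f_tilde(n):
    # \widetilde F_n[a,b] = zeta^(sum_q a_q b_q 2^q), with a_q, b_q the bits of a, b
    k = n.bit_length() - 1
    zeta = cmath.exp(2j * cmath.pi / n)
    return [[zeta ** sum((a >> q & 1) * (b >> q & 1) << q for q in range(k))
             for b in range(n)] for a in range(n)]

n, k = 8, 3
zeta = cmath.exp(2j * cmath.pi / n)
factors = [[[1, 1], [1, zeta ** (1 << q)]] for q in reversed(range(k))]
kron = tensor_product(factors)
assert all(abs(f_tilde(n)[a][b] - kron[a][b]) < 1e-9
           for a in range(n) for b in range(n))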

Why does it go faster this way?

Tensor products of matrices have the property A \otimes B = (A \otimes 1) \times (1 \otimes B) = (1 \otimes B) \times (A \otimes 1), where 1 is the identity matrix, which generalises straightforwardly to longer tensor products. Thus to apply \widetilde F_n to a vector x we only have to understand how to apply each \widetilde F_n^{(q)}: we need to apply this 2×2 matrix to each pair (x_a, x_b) where a and b differ only by the q-th bit. This needs 3 or 4 operations for each of the n/2 such pairs, so doing this for the k matrices only needs around nk or 2nk elementary operations on complex numbers, which is better and simpler than the usual algorithms for matrix multiplication. This tensor product point of view is equivalent to the more classical divide-and-conquer approach to fast algorithms.
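
In code, applying \widetilde F_n this way takes only a few lines; a sketch (reusing cmath and f_tilde from above for the check), where each pass applies one 2×2 factor to the n/2 pairs of coordinates differing in bit q (the factors act on disjoint bits, so the order of the passes does not matter):

def apply_f_tilde(x):
    # multiply by \widetilde F_n in O(nk) operations, one 2x2 factor at a time
    x = list(x)
    n = len(x)
    k = n.bit_length() - 1
    zeta = cmath.exp(2j * cmath.pi / n)
    for q in range(k):
        w = zeta ** (1 << q)
        for a in range(n):
            if not a >> q & 1:          # bit q of a is 0, b is a with bit q set
                b = a | 1 << q
                x[a], x[b] = x[a] + x[b], x[a] + w * x[b]
    return x

# agrees with the quadratic matrix-vector product
F = f_tilde(8)
x = [complex(v) for v in range(8)]
slow = [sum(F[a][b] * x[b] for b in range(8)) for a in range(8)]
assert all(abs(u - v) < 1e-9 for u, v in zip(apply_f_tilde(x), slow))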

Update: the FFT is actually not really a tensor product, because we first have to convert a polynomial f = \sum \phi_b X^b into the form \sum f_a X_0^{a_0} \cdots X_{k-1}^{a_{k-1}}. This can again be done by divide-and-conquer techniques, using the following identities:

f(X^2) = \sum_{b < n/2} (\phi_b+\phi_{n/2+b}) X^{2b} = \sum_{\text{even } a} (f_a+f_{a+1}) (X_0^2)^{a_1} (X_1^2)^{a_2} \cdots (X_{k-2}^2)^{a_{k-1}}

f(\zeta X^2) = \sum_{b < n/2} (\phi_b - \phi_{n/2+b}) (\zeta X^2)^b = \sum_{\text{even } a} (f_a+ \zeta f_{a+1}) (X_0^2)^{a_1} (X_1^2)^{a_2} \cdots (X_{k-2}^2)^{a_{k-1}}

The base change matrix can thus be written as some sort of twisted tensor product, and needs only O(nk) operations. The traditional FFT algorithm for size 2^k actually does the two steps I described simultaneously.
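
Putting the base change and the tensor product passes together gives the traditional algorithm; here is a minimal recursive sketch (decimation in frequency, following the two identities above, with \exp(-2i\pi/n) to match the sign convention of the naive dft sketch):

def fft(x):
    # radix-2 divide and conquer: the two identities split the problem
    # into two transforms of size n/2 (even and odd output indices)
    n = len(x)
    if n == 1:
        return list(x)
    half = n // 2
    w = cmath.exp(-2j * cmath.pi / n)
    even = fft([x[b] + x[b + half] for b in range(half)])
    odd = fft([(x[b] - x[b + half]) * w ** b for b in range(half)])
    out = [0] * n
    out[0::2], out[1::2] = even, odd
    return out

# agrees with the naive quadratic dft
x = [complex(v) for v in [3, 1, 4, 1, 5, 9, 2, 6]]
assert all(abs(u - v) < 1e-9 for u, v in zip(fft(x), dft(x)))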

Entry filed under: algebra, english, group theory, undergraduate.


3 Comments

  • 1. Scott Carnahan  |  16 March 2009 at 4:16 am

    This is the best explanation I’ve read. Thanks! Applied mathematicians often call the matrix tensor product the Kronecker product, and a lot of math software implements it using a function called kron.

    Note: some of your older posts have lost their latex backslashes. I think this sometimes happens when WordPress uses an auto-saved copy.

  • 2. Scott Carnahan  |  16 March 2009 at 4:36 pm

    I tried to make this work for n=4, but I was unsuccessful. I think the formula may be a little more complicated than a tensor product.

    • 3. remyoudompheng  |  16 March 2009 at 9:21 pm

      I think I messed up something between sum and product, trying to fix this…

