Fibonacci Number

The Fibonacci Sequence is pretty cool. It is defined by a simple recurrence:

$$
\begin{eqnarray}
F_0 & = & 0 \\
F_1 & = & 1 \\
F_n & = & F_{n-2} + F_{n-1} \quad \text{for } n \geq 2
\end{eqnarray}
$$

From this recurrence, we can calculate the first few terms of the Fibonacci Sequence very easily: $$F_0, F_1, F_2, \ldots = 0, 1, 1, 2, 3, 5, 8, 13, \ldots$$

Ways to calculate the $n^{\text{th}}$ Fibonacci Number, $F_n$

Recursion:

If you’re a human, recursion is probably not how you calculate $F_n$. However, it is a fairly simple approach. Using the definition above, we can calculate $F_n$ as:

$$
\begin{eqnarray}
F_n & = & F_{n-2} + F_{n-1} \\
& = & F_{n-4} + F_{n-3} + F_{n-3} + F_{n-2} \\
& = & F_{n-6} + F_{n-5} + F_{n-5} + F_{n-4} + F_{n-5} + F_{n-4} + F_{n-4} + F_{n-3} \\
& \vdots & \\
& = & F_0 + F_1 + F_0 + F_1 + \cdots
\end{eqnarray}
$$

As a simple example, consider the recursion tree for calculating $F_5$.

$$
\begin{eqnarray}
F_5 & = & F_3 + F_4 \\
& = & F_1 + F_2 + F_2 + F_3 \\
& = & 1 + F_0 + F_1 + F_0 + F_1 + F_1 + F_2 \\
& = & 1 + 0 + 1 + 0 + 1 + 1 + F_0 + F_1 \\
& = & 4 + 0 + 1 \\
& = & 5
\end{eqnarray}
$$

While this method works, it’s clearly not very efficient, since each call spawns roughly two more calls. It turns out this recursive method takes exponential time (the recursion tree for $F_n$ contains on the order of $F_n$ calls), while the call stack grows linearly with $n$, so it’s not a good way to code it. Later, I will provide some analysis of fairly tight upper bounds on the time and space of the calculation.
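For reference, here’s a minimal sketch of the naive recursion in Python (the function name is just my choice):

```python
def fib_recursive(n):
    """Naive recursive Fibonacci, following the recurrence directly."""
    if n < 2:          # base cases: F_0 = 0, F_1 = 1
        return n
    # Each call spawns two more calls, so the call tree blows up exponentially.
    return fib_recursive(n - 2) + fib_recursive(n - 1)

print(fib_recursive(5))  # 5
```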

Dynamic Programming (DP):

This method gets its name from the computer science term “dynamic programming,” a completely misleading name that here just refers to solving a problem from the bottom up. This method is fairly simple to write and understand. Take an empty list and fill in the terms $F_0$ and $F_1$ as the $0^{\text{th}}$ and $1^{\text{st}}$ entries. Now for the $n^{\text{th}}$ term, simply add the two previous terms. For example, to calculate $F_5$, we could do the following:

$$
\begin{array}{|c|c|c|c|c|c|}
0 & 1 & – & – & – & – \\ \hline
0 & 1 & 1 & – & – & – \\ \hline
0 & 1 & 1 & 2 & – & – \\ \hline
0 & 1 & 1 & 2 & 3 & – \\ \hline
0 & 1 & 1 & 2 & 3 & 5 \\
\end{array}
$$
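In code, the bottom-up fill might look something like this sketch (Python; the name `fib_dp` is mine):

```python
def fib_dp(n):
    """Bottom-up Fibonacci: fill a list with F_0 .. F_n and read off the last entry."""
    table = [0, 1]                              # F_0 and F_1
    for i in range(2, n + 1):
        table.append(table[i - 2] + table[i - 1])
    return table[n]

print(fib_dp(5))  # 5
```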

The nice thing about dynamic programming (the bottom-up approach) is that you never recalculate something that has already been calculated. Remember how many times $F_2$ had to be recomputed in the recursion example? With this table, computing $F_n$ clearly takes $O(n)$ time and $O(n)$ space. Can we do any better?

DP with Sliding Window (Iteration):

The dynamic programming approach given above is clearly superior to the recursion approach. But is it possible to improve the space requirements? Well, if you notice, when calculating $F_n$, only $F_{n-2}$ and $F_{n-1}$ are needed. So we could easily throw out the earlier terms, which means we really only need space for $3$ variables. Finding $F_5$ could look like this:

$$
\begin{array}{|c|c|c|}
0 & 1 & – \\ \hline
0 & 1 & 1 \\ \hline
1 & 1 & 2 \\ \hline
1 & 2 & 3 \\ \hline
2 & 3 & 5 \\
\end{array}
$$

Notice that at each step, we only need three memory slots. In fact, this is probably the method most people would use to calculate the $n^{\text{th}}$ Fibonacci number. Note that while the space requirement improves to $O(1)$, the time complexity remains $O(n)$.
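A sketch of this sliding-window iteration in Python (names are mine; Python’s tuple assignment hides the third, temporary slot):

```python
def fib_iterative(n):
    """Keep only the last two Fibonacci numbers while stepping up to F_n."""
    a, b = 0, 1            # a = F_0, b = F_1
    for _ in range(n):
        a, b = b, a + b    # slide the window one step forward
    return a

print(fib_iterative(5))  # 5
```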

Matrix Multiplication:

Suppose we look at two equations that are clearly correct:

$$
\begin{eqnarray}
F_{n+1} &=& F_{n} + F_{n-1} \\
F_{n} &=& F_{n} + 0
\end{eqnarray}
$$

This sort of looks like a system of equations that can be represented with matrices.
$$
\left(
\begin{array}{c}
F_{n+1} \\
F_{n} \\
\end{array}
\right) =
\left(
\begin{array}{c c}
1 & 1 \\
1 & 0 \\
\end{array}
\right)
\left(
\begin{array}{c}
F_{n} \\
F_{n-1} \\
\end{array}
\right)
$$

Now with these matrix equations, we can see how to find $F_n$. Note how the matrix:

$$
M =
\left(
\begin{array}{c c}
1 & 1 \\
1 & 0 \\
\end{array}
\right)
$$

transforms the $\left( \begin{array}{c} F_{k+1} \\ F_{k} \end{array} \right)$ vector into the $\left( \begin{array}{c} F_{k+2} \\ F_{k+1} \end{array} \right)$ vector. Essentially, it iterates the Fibonacci numbers: multiplying by $M$ on the left $n$ times (that is, applying $M^n$) steps the sequence forward $n$ places. In fact, let’s look at an example:

$$
M^3 =
\left(
\begin{array}{c c}
3 & 2 \\
2 & 1 \\
\end{array}
\right) =
\left(
\begin{array}{c c}
F_4 & F_3 \\
F_3 & F_2 \\
\end{array}
\right)
$$

This suggests:

$$
\left(
\begin{array}{c c}
F_{n+1} & F_{n} \\
F_{n} & F_{n-1} \\
\end{array}
\right) =
\left(
\begin{array}{c c}
1 & 1 \\
1 & 0 \\
\end{array}
\right)^n
$$

We can easily prove this with induction.

Base Case $n=1$:

$$
\left(
\begin{array}{c c}
F_{2} & F_{1} \\
F_{1} & F_{0} \\
\end{array}
\right) =
\left(
\begin{array}{c c}
1 & 1 \\
1 & 0 \\
\end{array}
\right)^1
\;\;\;\; \checkmark
$$

Inductive Step (assume the identity holds for $n$ and multiply both sides on the left by $M$):

$$
\left(
\begin{array}{c c}
1 & 1 \\
1 & 0 \\
\end{array}
\right)
\left(
\begin{array}{c c}
F_{n+1} & F_{n} \\
F_{n} & F_{n-1} \\
\end{array}
\right) =
\left(
\begin{array}{c c}
F_{n+1} + F_{n} & F_{n} + F_{n-1} \\
F_{n+1} + 0 & F_{n} + 0 \\
\end{array}
\right) =
\left(
\begin{array}{c c}
F_{n+2} & F_{n+1} \\
F_{n+1} & F_{n} \\
\end{array}
\right)
\;\;\;\; \checkmark
$$
$$
\blacksquare
$$

This is a pretty nice fact! It means we can easily find $F_n$ by taking the $n^{\text{th}}$ power of a simple $2 \times 2$ matrix. Now clearly we can calculate $F_n$ in $O(1)$ space and $O(n)$ time. However, this isn’t a gain over the sliding window method. In fact, the added complexity makes this method more wasteful.
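As a sketch, repeated multiplication by $M$ could be coded like this in Python (the $2 \times 2$ `mat_mult` helper is a hypothetical name of mine):

```python
def mat_mult(A, B):
    """Multiply two 2x2 matrices stored as tuples of tuples."""
    return (
        (A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]),
        (A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]),
    )

M = ((1, 1), (1, 0))

def fib_matrix_naive(n):
    """Compute M^n by n - 1 multiplications; F_n is the top-right entry of M^n."""
    if n == 0:
        return 0
    P = M
    for _ in range(n - 1):
        P = mat_mult(P, M)
    return P[0][1]          # M^n = [[F_{n+1}, F_n], [F_n, F_{n-1}]]

print(fib_matrix_naive(5))  # 5
```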

The cool thing is that powers can be computed in logarithmic time, because $M^n = M^{\frac{n}{2}}M^{\frac{n}{2}}$ when $n$ is even (and $M^n = M \cdot M^{\lfloor n/2 \rfloor} M^{\lfloor n/2 \rfloor}$ when $n$ is odd). We only need to calculate $M^{\frac{n}{2}}$ once, so only $O(\log_2 n)$ matrix multiplications are needed. Thus the space complexity stays at $O(1)$ (only a couple of $2 \times 2$ matrices are ever held at once), while the time complexity drops to $O(\log_2 n)$, assuming each multiplication costs $O(1)$. Note that for large $n$ the entries themselves grow, so multiplication is no longer an $O(1)$ operation.
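Here’s a sketch of the repeated-squaring idea in Python (reusing the hypothetical `mat_mult` helper, repeated here so the snippet stands alone):

```python
def mat_mult(A, B):
    """Multiply two 2x2 matrices stored as tuples of tuples."""
    return (
        (A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]),
        (A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]),
    )

def fib_matrix_fast(n):
    """Compute F_n with O(log n) 2x2 matrix multiplications (exponentiation by squaring)."""
    result = ((1, 0), (0, 1))           # identity matrix
    base = ((1, 1), (1, 0))             # M
    while n > 0:
        if n & 1:                       # odd exponent: fold one factor of base into result
            result = mat_mult(result, base)
        base = mat_mult(base, base)     # square
        n >>= 1
    return result[0][1]                 # top-right entry of M^n is F_n

print(fib_matrix_fast(5))   # 5
print(fib_matrix_fast(90))  # 2880067194370816120
```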