Recursive algorithm
A recursive algorithm is an algorithm that calls itself. It has one or more base cases, problems it can solve trivially, and one or more recursive cases, problems that must be reduced in some way before the algorithm is applied to them again. As a rule of thumb, recursion is a good fit when the answer to a problem can be constructed from the answers to smaller instances of the same problem.
Here's a recursive algorithm for computing factorials:
algorithm factorial is
    input: an integer n ≥ 0
    output: n!

    if n = 0 (base case)
        return 1
    else (recursive case)
        return n * factorial(n - 1)
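As a sketch, the pseudocode above translates almost line for line into Python (the function name and type hints here are illustrative choices, not part of the pseudocode):

```python
def factorial(n: int) -> int:
    """Compute n! recursively for an integer n >= 0."""
    if n == 0:                       # base case
        return 1
    return n * factorial(n - 1)      # recursive case
```

Each call reduces n by one, so the chain of recursive calls is guaranteed to reach the base case n = 0.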
The recursive algorithm for computing Fibonacci numbers calls itself more than once in its recursive case. Take a look:
algorithm fibonacci is
    input: an integer n ≥ 0
    output: the nth Fibonacci number

    if n = 0 (base case)
        return 0
    else if n = 1 (another base case)
        return 1
    else (recursive case)
        return fibonacci(n - 1) + fibonacci(n - 2)
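The same pseudocode, sketched in Python (again, the function name is an illustrative choice):

```python
def fibonacci(n: int) -> int:
    """Compute the nth Fibonacci number by direct recursion."""
    if n == 0:       # base case
        return 0
    if n == 1:       # another base case
        return 1
    # recursive case: two recursive calls per invocation
    return fibonacci(n - 1) + fibonacci(n - 2)
```

Because each invocation in the recursive case spawns two more calls, the total number of calls grows exponentially with n.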
Unfortunately, directly adapting the recursive definition of the Fibonacci sequence into an algorithm produces a correct but terribly inefficient result. The above algorithm makes a huge number of redundant calls: if you make note of each recursive call, you'll see that many of them recompute Fibonacci numbers that have already been computed. A more efficient, non-recursive implementation that "remembers" previously computed Fibonacci numbers is given below:
algorithm efficient_fibonacci is
    input: an integer n ≥ 0
    output: the nth Fibonacci number

    f ← [0, 1]
    for i = 2 to n
        append f[i - 1] + f[i - 2] to f
    return f[n]
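A Python sketch of this bottom-up version (the list `f` plays the role of the "memory" of previously computed values):

```python
def efficient_fibonacci(n: int) -> int:
    """Compute the nth Fibonacci number bottom-up, storing earlier results."""
    f = [0, 1]                        # f[0] and f[1] are known
    for i in range(2, n + 1):
        f.append(f[i - 1] + f[i - 2])  # each value is computed exactly once
    return f[n]
```

Every Fibonacci number up to the nth is computed exactly once, so the running time is linear in n rather than exponential.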
This bottom-up approach to computing Fibonacci numbers is an example of dynamic programming. It closely resembles how people actually compute Fibonacci numbers with a pen and paper. So, when designing algorithms, it might be helpful to think about how someone would practically solve the problem without a computer.
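The pen-and-paper observation can be pushed one step further: a person computing Fibonacci numbers by hand only ever needs the last two values, not the whole sequence. As a sketch (this constant-space variant is an illustration, not part of the pseudocode above):

```python
def fibonacci_pair(n: int) -> int:
    """Compute the nth Fibonacci number keeping only the last two values,
    much as one would with pen and paper."""
    a, b = 0, 1          # F(0) and F(1)
    for _ in range(n):
        a, b = b, a + b  # slide the window forward by one
    return a
```

This keeps the linear running time of the bottom-up version while using only two variables of storage.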