Here are the two solutions, to help draw the comparison. First, the recursive version:
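The Scheme original isn't reproduced here, so here's a minimal sketch in Python instead, assuming the recurrence from the exercise: f(n) = n for n < 3, and f(n) = f(n−1) + 2f(n−2) + 3f(n−3) otherwise.

```python
def fun(n):
    # Tree-recursive version: a direct translation of the recurrence.
    if n < 3:
        return n
    return fun(n - 1) + 2 * fun(n - 2) + 3 * fun(n - 3)
```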
This is concise and quite clearly shows how the recurrence relation works. It’s practically the same statement as in the text for the function, translated to prefix notation. There are two cases, just as stated, and they are handled as expected. Taking care of the steps involved is done by the recursion.
Here’s the iterative version:
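Again sketched in Python rather than the original Scheme; the fun_iter name mirrors the fun-iter helper discussed below, and the seed values are my assumption based on how the n < 3 cases fall out.

```python
def fun_iter(a, b, c, count):
    # a, b, c carry three consecutive values of the function;
    # count steps down toward zero, at which point c is the answer.
    if count == 0:
        return c
    return fun_iter(a + 2 * b + 3 * c, a, b, count - 1)

def fun(n):
    # Seeding with f(2), f(1), f(0) = 2, 1, 0 is what quietly
    # covers the n < 3 cases without any explicit check.
    return fun_iter(2, 1, 0, n)
```

Python doesn't eliminate tail calls, so for very large n the helper would be rewritten as a plain loop; the shape above is kept to match the description in the text.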
It’s still relatively short, and its heart (the call to fun-iter) is a fairly simple expression. Compared to the recursive version, though, it’s much harder to see the end result. Examining the [tail-recursive] call to itself in fun-iter, we can see it’s building terms up from a weighted sum of the last three values, then sending the new value plus the last two on to the next iteration, while decreasing the index until it hits zero. It’s hardly even clear that the n < 3 case is covered, as no apparent checking of that condition exists. It’s an obfuscated version of what the book provided.
But there is also a big difference in the computation process, favoring the iterative version. The first version is, as this section is titled, tree recursive. It calls itself not just once but three times for each step down the process. Worse still, it’s repeating work. In order to calculate (fun (- n 1)) it goes back and calculates (fun (- n 2)), (fun (- n 3)), etc. And then for (fun (- n 2)) in the original call it goes back and does all those steps (save the first) again! It’s easy to write, but terribly inefficient in computation.
It’s so bad that n need not be very large before we give up waiting for the computer to produce the answer. I included some numbers, up to the point where I didn’t want to keep waiting on my computer. Yours may vary. The iterative version, comparatively, is super speedy even for much higher values.
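To make the cost concrete, here's a sketch that instruments the tree-recursive version with a call counter (the counting scaffolding is mine, not from the book):

```python
def fun_counted(n, counter):
    # Same tree recursion as before, but tallying every call made.
    counter[0] += 1
    if n < 3:
        return n
    return (fun_counted(n - 1, counter)
            + 2 * fun_counted(n - 2, counter)
            + 3 * fun_counted(n - 3, counter))

def count_calls(n):
    counter = [0]
    fun_counted(n, counter)
    return counter[0]

for n in (10, 15, 20):
    print(n, count_calls(n))  # the call count grows exponentially with n
```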
There is one point where my particular version of the iterative process fails. The book uses the variable n, which implies a function over the whole numbers, although it does not explicitly say so. The relation given actually works just fine for other values, as can be seen from the fact that the recursive version handles such values without a problem. The iterative one takes a few shortcuts and doesn’t give the right result in those cases. Proper handling of such values would not ruin the main advantages of the approach, since it could be handled outside of the iteration itself.
Here we see again how it’s fairly easy to express this sort of problem using recursion. Only a few lines of code suffice, and they are pretty similar to the statement of Pascal’s triangle: to get a specific value, sum the two numbers above it. Obviously finding those two values means going to the numbers above them, and so on. I’ve used the terms row and indx to indicate the numbers: row is the row in the triangle, and indx is how far from the edge it is. For example, (pascal 5 3) gives the third number in on the fifth row. I also set ‘invalid’ entries to be 0. It makes the expression simpler, since the recursion can sum those values with 1 to make the edges.
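The Scheme isn't reproduced here either, so a Python sketch of the procedure as described, using the same row/indx convention and returning 0 for the 'invalid' entries:

```python
def pascal(row, indx):
    # Value indx steps in from the edge of the given row (both 1-based).
    if indx < 1 or indx > row:
        return 0  # 'invalid' entries off the triangle's edge
    if row == 1:
        return 1  # the apex of the triangle
    # Sum the two entries above; at an edge, one of them is an 'invalid' 0,
    # so the edge value comes out as 0 + 1.
    return pascal(row - 1, indx - 1) + pascal(row - 1, indx)
```

As in the text, pascal(5, 3) gives 6, the third number in on the fifth row (1 4 6 4 1).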
The recursive approach still runs into an efficiency problem, and to get really deep into the tree with reasonable speed would require a more optimized procedure.
No code here, just a mathematical proof. There are a couple exercises in the book that involve proofs. I’m not going to be especially rigorous about any of them, so hopefully you can understand how it goes from what I’m sketching.
First off, the statement to prove:
Fib(n) is the closest integer to φ^n/√5, where φ = (1 + √5)/2
In case you’re unfamiliar with φ, it is known as the Golden Ratio. It shows up in surprising places, including in the analysis of some natural processes, and is frequently used in artistic designs for its aesthetic appeal.
A more elegant expression for φ is as the positive root of this equation:

φ^2 = φ + 1
That’s also going to be useful for us. From this, multiply by φ to get:
φ^3 = φ^2 · φ
φ^3 = (φ + 1) · φ
φ^3 = φ^2 + φ
We could continue in like fashion, multiplying by φ each time. The general statement, then, is that:

φ^n = φ^(n-1) + φ^(n-2), for n ≥ 2
It is true for lower values of n as well, although we don’t really need that result.
As for ψ, it’s the other solution to the equation above: φ is the positive root, (1 + √5)/2, and ψ is the negative root, (1 − √5)/2. The point is that the same recurrence holds for ψ as for φ: ψ^n = ψ^(n-1) + ψ^(n-2).
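A quick numeric spot-check of that recurrence for both roots (a sketch; the tolerance is an arbitrary choice of mine):

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2  # positive root of x^2 = x + 1
psi = (1 - sqrt(5)) / 2  # negative root of the same equation

for root in (phi, psi):
    # Both roots satisfy x^2 = x + 1, hence x^n = x^(n-1) + x^(n-2).
    assert abs(root**2 - (root + 1)) < 1e-9
    for n in range(2, 12):
        assert abs(root**n - (root**(n - 1) + root**(n - 2))) < 1e-9
print("recurrence holds for both roots")
```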
This little bit out of the way, we can start on our induction using the Fibonacci series.
Assume Fib(k) = (φ^k − ψ^k)/√5 and Fib(k-1) = (φ^(k-1) − ψ^(k-1))/√5
Fib(k+1) = Fib(k) + Fib(k-1), by definition of Fibonacci numbers
Fib(k+1) = (φ^k − ψ^k)/√5 + (φ^(k-1) − ψ^(k-1))/√5
Fib(k+1) = ((φ^k + φ^(k-1)) − (ψ^k + ψ^(k-1)))/√5
∴ Fib(k+1) = (φ^(k+1) − ψ^(k+1))/√5
That’s one part of the induction. If it’s true for k and k-1, it’s true for k+1.
Next we prove the base cases:
For k = 0:
Fib(0) ≟ (φ^0 − ψ^0)/√5
0 ≟ (1 − 1)/√5
0 = 0
For k = 1:
Fib(1) ≟ (φ^1 − ψ^1)/√5
1 ≟ ((1 + √5) − (1 − √5)) / (2√5) = 2√5 / (2√5)
1 = 1
∴ Fib(n) = (φ^n − ψ^n)/√5
Rearranging, the ψ^n/√5 term is the difference between Fib(n) and φ^n/√5, the value we want to prove it is close to.
Now the value of ψ is actually around -0.618, which has an absolute value less than 1. Raised to any positive power, that’ll still be less than 1. The value of √5 is a little over 2.
|ψ^n| < 1 for any n ≥ 0, so |ψ^n/√5| < 1/2
Which means that φ^n/√5 is never more than 1/2 away from Fib(n). In other words, no integer can be closer to φ^n/√5 than Fib(n) is. QED.
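The whole argument can be spot-checked numerically (a sketch; fib here is an ordinary iterative Fibonacci of my own, not from the book):

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2

def fib(n):
    # Plain iterative Fibonacci: fib(0) = 0, fib(1) = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Fib(n) should be the closest integer to phi^n / sqrt(5).
for n in range(25):
    assert fib(n) == round(phi**n / sqrt(5))
print("Fib(n) is the closest integer to φ^n/√5 for n < 25")
```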