SICP 1.3.1 Solutions

April 16, 2013 12:09

Ex 1.29

Writing the Simpson’s Rule solver isn’t all that difficult. Given the topic of discussion, it’s clear that using sum is the desired approach. Computing a single term of the series is perhaps the trickiest part, but with that written, it takes only one line to add up the series. I used one built-in function: add1, which is the same as inc in some variants of Scheme. It does as the name suggests and adds 1 to the value given (without relying on the + addition procedure).
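
As a sketch of what this can look like (this follows the formula given in the exercise and the book’s recursive sum; the version in the solutions file may differ in details):

(define (sum term a next b)
  (if (> a b)
      0
      (+ (term a)
         (sum term (next a) next b))))

; Simpson's Rule: (h/3)(y0 + 4*y1 + 2*y2 + ... + 4*y(n-1) + yn),
; where yk = f(a + k*h), h = (b - a)/n, and n is even.
(define (simpson f a b n)
  (define h (/ (- b a) n))
  (define (y k) (f (+ a (* k h))))
  (define (simpson-term k)
    (* (cond ((or (= k 0) (= k n)) 1)   ; endpoints get coefficient 1
             ((odd? k) 4)               ; odd indices get coefficient 4
             (else 2))                  ; even interior indices get 2
       (y k)))
  (* (/ h 3) (sum simpson-term 0 add1 n)))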

For testing, the requirements are given in the book: integrate cube from 0 to 1. Although the numbers given in the book for integral can be used directly for comparison, it’s easy enough to calculate them again. That’s also worth doing, since you want to know that the rest of the math is functioning properly, and it provides an extra check on sum. In both cases, the error I see using Simpson’s Rule shows up only in the lowest printed decimal place. It is clear that Simpson’s Rule converges much faster than the book’s basic integral procedure.
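
For instance, using the sketch above alongside the integral procedure from the book (the integral values shown are the ones printed in the book; simpson returns an exact 1/4 here because Simpson’s Rule is exact for cubics and these inputs are exact numbers):

; The book's integral procedure, for comparison
(define (integral f a b dx)
  (define (add-dx x) (+ x dx))
  (* (sum f (+ a (/ dx 2.0)) add-dx b)
     dx))

(define (cube x) (* x x x))

(integral cube 0 1 0.01)   ; => .24998750000000042 (value from the book)
(integral cube 0 1 0.001)  ; => .249999875000001
(simpson cube 0 1 100)     ; => 1/4, exact with exact inputs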

Of course, it’s always interesting to see what happens in unusual cases. I also tested two integrals that do not converge. The first one blows up quickly under Simpson’s Rule for me; DrRacket gives ‘+inf.0’ for the value, meaning positive infinity. The second is an integral that is harder to deal with numerically, as it goes to positive infinity on one side of the interval and negative infinity on the other. It’s a bit harder to get this one to fail outright, although it’s fairly obvious that it doesn’t converge, since changing n by just one changes the result by quite a bit.

Ex 1.30

Comparing linear recursive and iterative processes is something we’ve done before. This time we see the practical effects that the difference can have on another procedure. The tests use the integral method, but the comparison could be made with any procedure that calls sum. It shows how building procedures on ‘lower-level’ ones can be beneficial.

When I originally solved 1.29, I used an iterative procedure inside the Simpson’s Rule solver as a matter of preference. I revised it to use sum, as that seems to be the book’s intent and it also leads into this exercise. Making sum run faster improves both of the integral solvers, and both can be tested for speed.
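
Filling in the template the exercise provides gives the iterative sum; redefining it this way means integral and the Simpson’s Rule solver pick up the improvement automatically:

(define (sum term a next b)
  (define (iter a result)
    (if (> a b)
        result                              ; all terms consumed
        (iter (next a) (+ result (term a)))))
  (iter a 0))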

The downside of doing this is that any bugs in sum become part of every procedure relying on it, but as a practical matter, it’s usually better. If 50 procedures all use one broken procedure, the bug is more likely to be discovered than if just a single one uses it. It can also be a lot less code to deal with if those 50 procedures are each reduced to one line that calls a ten-line abstraction, instead of each containing ten lines of its own.

Ex 1.31

Once product is written, the procedures that make use of it are short and easy to write. My version of factorial doesn’t even need to develop the term function separately, as it can make use of identity and add1. This is what was alluded to in the introduction to the exercises about needing to provide the right arguments. We need the identity procedure to fit the argument restrictions of product.
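
A sketch of how that can look, using the same recursive shape as sum:

(define (product term a next b)
  (if (> a b)
      1                                 ; the empty product is 1
      (* (term a)
         (product term (next a) next b))))

(define (identity x) x)

; factorial needs no separate term procedure; identity and add1
; fit the argument requirements of product directly.
(define (factorial n)
  (product identity 1 add1 n))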

There’s a bit of a trade-off here. Using the abstraction gives us the benefits mentioned in the last exercise, but on occasion we have to add a bit of complexity compared with writing a more specialized version. Deciding how far to take this is a big question in programming. There’s no simple answer, and it’s generally a matter of philosophical preference.

In the second part, we simply modify the underlying procedure and again test for accuracy and speed. This is where it becomes necessary to make changes if the language doesn’t allow defined procedures to be redefined. Languages (or Scheme implementations, even) with this restriction often provide some other means of altering procedures; it might mean splitting the code into separate files, or reloading an environment with the new values.
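
The iterative replacement mirrors the one for sum; since the interface is unchanged, factorial and any other callers need no modification:

(define (product term a next b)
  (define (iter a result)
    (if (> a b)
        result
        (iter (next a) (* result (term a)))))
  (iter a 1))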

Ex 1.32

It’s likely that this abstraction was already noticed while writing the product procedure. The sum and product I wrote are almost identical, differing only in the combining function and the value returned in the base case, when the index passes b. The step to the next abstraction merely involves replacing those two. In the second part, we once again swap iterative for recursive, and can use the same tests as in the first part, since the interface doesn’t change. The test function now exercises not just sum and product, which use accumulate directly, but also integral and factorial, which are built on those two.
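
Sketched out, with sum and product rewritten on top of it (the recursive version is shown; the iterative one changes in the same way it did before):

(define (accumulate combiner null-value term a next b)
  (if (> a b)
      null-value
      (combiner (term a)
                (accumulate combiner null-value term (next a) next b))))

(define (sum term a next b)
  (accumulate + 0 term a next b))

(define (product term a next b)
  (accumulate * 1 term a next b))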

One thing of note is that the book describes null-value as the ‘base value to use when the terms run out’. With the math functions it’s easy to satisfy this by understanding it as the (mathematical) identity for that operation. It’s harder to see what would happen if a more unusual null-value and next were used; that’s a case where an alternate implementation could well change the result. In truth it’s hard to come up with a convoluted example off the top of my head, but hopefully it can be seen that these abstractions could get you into trouble if exactly what is meant by them is misunderstood.

Ex 1.33

This exercise provides yet another abstraction that allows some complex problems to be expressed relatively simply. Each segment of the abstraction can be broken down and provided separately. It’s not mentioned explicitly, but accumulate above is just the special case in which the filter lets everything through. As with the identity procedure used previously, we’d need to make a procedure that simply returns true, but with it accumulate can be rewritten in just one line.
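
Something like this, using the filtered-accumulate defined below:

; A filter that accepts every index, analogous to identity
(define (always-true x) #t)

(define (accumulate combiner null-value term a next b)
  (filtered-accumulate combiner always-true null-value term a next b))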

The issue brought up in the last exercise about null values becomes more noticeable here. Here’s how I wrote filtered-accumulate:

(define (filtered-accumulate combiner filter null-value term a next b)
  ; Iterative process; partial holds the result accumulated so far.
  (define (iter a partial)
    (if (> a b)
        partial
        (if (filter a)
            (iter (next a) (combiner (term a) partial)) ; a passes the filter
            (iter (next a) partial))))                  ; a is skipped
  (iter a null-value))

Suppose, for instance, that the filtering were done another way. Replace the false branch of the (> a b) test with a single call to iter that checks the filter in-line:

(iter (next a)
      (combiner (if (filter a) (term a) null-value)
                partial))

In this version, null-value gets combined in for every term that fails the filter. The change would have no effect on the existing sum and product procedures, but it could have a big effect if the null-value is something that actually modifies the series. Really, this is just an argument for making sure the abstraction is understood. Is null-value something very much like a mathematical identity for combiner, is it a ‘starting point’ for the series, or is it something else? Whatever decision the developers of the abstraction make is hopefully conveyed in some way that keeps the procedures sensible to those who use them.
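
A contrived illustration of the difference (hypothetical, not part of the exercise): use cons as the combiner, so the ‘series’ is simply a list of the terms that passed the filter.

; Collect the odd numbers from 1 to 5 into a list
(filtered-accumulate cons odd? '() identity 1 add1 5)
; original version => (5 3 1)
; in-line variant  => (5 () 3 () 1), with '() consed in for 2 and 4

Here the null-value '() is nothing like an identity for cons, so the two implementations visibly disagree.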

Solutions 1.3.1