# SICP 2.2.3 Solutions

July 31, 2014 11:06

### Ex 2.33.

These aren’t overly complex statements, and they give a good idea of how these list manipulations can all be reduced to the simple operation of building up a new list after processing one element at a time. It’s worth realizing that because `accumulate` works recursively, the rebuilt list maintains the original order of the list just by using `cons` (it makes sense when you think it through).

With `map`, it’s a matter of applying the procedure and adding the result to the newly-built list. The accumulator version of `append` is the same as rebuilding with `cons`, but we reverse the order of the arguments so that the appended list is placed last (I actually got this wrong by forgetting how the built-in `append` orders the arguments until I tested against it).

Finally, `length` is a different sort of operation, one where we don’t care about the elements themselves. In this case, just adding 1 to a sum for each element will suffice.
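Since the original listings didn’t survive, here is a sketch of the definitions just described. The `acc-` prefixes are my own, chosen to avoid shadowing the built-ins; `accumulate` is the fold-right given in the text.

```scheme
;; accumulate (a fold-right), as given in the text
(define (accumulate op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence)
          (accumulate op initial (cdr sequence)))))

;; map: apply p to each element, consing results onto the rebuilt list
(define (acc-map p sequence)
  (accumulate (lambda (x rest) (cons (p x) rest))
              '()
              sequence))

;; append: rebuild seq1 with cons, using seq2 as the initial
;; value so that it ends up last
(define (acc-append seq1 seq2)
  (accumulate cons seq2 seq1))

;; length: ignore each element, just add 1 to the running count
(define (acc-length sequence)
  (accumulate (lambda (x count) (+ 1 count))
              0
              sequence))
```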

### Ex 2.34.

This one can be a little tricky to understand, but it can be figured out by working backwards from the expression in the text. We need a procedure that will work at every step, and therefore we look at what happens in the last step. At that point we add *a*₀ to the end of an expression that is made up of many previously calculated terms multiplied by *x*. So we have to use addition of the coefficient in our expression, and will need to multiply *x* by something. A bit of work reveals that using merely the expression `higher-terms` (i.e. the accumulated result) as that something will work.
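Filling the blanks that way gives the usual Horner’s-rule evaluator (sketched here with the text’s `accumulate` restated so it runs on its own):

```scheme
(define (accumulate op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence)
          (accumulate op initial (cdr sequence)))))

;; Horner's rule: each step multiplies the accumulated higher
;; terms by x and adds the current coefficient
(define (horner-eval x coefficient-sequence)
  (accumulate (lambda (this-coeff higher-terms)
                (+ this-coeff (* x higher-terms)))
              0
              coefficient-sequence))

;; 1 + 3x + 5x^3 + x^5 at x = 2, the example from the exercise
(horner-eval 2 (list 1 3 0 5 0 1))  ; => 79
```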

### Ex 2.35.

Accumulate lets us simplify several of our previously defined procedures, and here we give the treatment to `count-leaves`. Given a bit of leeway with the expression, there may be more than one way to go about it, but all will have similar sorts of tests. Since we’re going for a numerical count, it makes sense to use addition as our accumulating operation, with an initial value of 0. The map operation needs to get the count of leaves at each node (element of the list). We test whether the node is a leaf or not (using `pair?`): if it is a leaf it counts as 1, and if not, we get the number of leaves on the subtree by calling `count-leaves` recursively.
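The description above translates into a short definition (again with `accumulate` restated so the sketch is self-contained):

```scheme
(define (accumulate op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence)
          (accumulate op initial (cdr sequence)))))

;; sum the per-node leaf counts across the top level of the tree
(define (count-leaves t)
  (accumulate +
              0
              (map (lambda (node)
                     (if (pair? node)
                         (count-leaves node)  ; subtree: count recursively
                         1))                  ; leaf: counts as one
                   t)))
```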

### Ex 2.36.

Another exercise where we just need to fill in the blank with what’s needed for the procedure. In the first line, `accumulate` needs a list of values to operate on, which according to the definition should be the first elements of each list. In the second line, `accumulate-n` needs a list of lists to work on, which we expect to be the remainder of each list that hasn’t been operated on yet. For both cases, `map` will work: the procedure passed as the argument is `car` in the first line, and `cdr` in the second, taking the remainder of each list. Recall that this works because `map` can operate on multiple lists (by using dotted-tail notation, which would also be a way of implementing a more generic `accumulate-n`).
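With the blanks filled in, the skeleton from the exercise becomes:

```scheme
(define (accumulate op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence)
          (accumulate op initial (cdr sequence)))))

;; fold each position across a list of equal-length lists:
;; accumulate the heads, then recurse on the tails
(define (accumulate-n op init seqs)
  (if (null? (car seqs))
      '()
      (cons (accumulate op init (map car seqs))
            (accumulate-n op init (map cdr seqs)))))

;; the example from the exercise text
(accumulate-n + 0 '((1 2 3) (4 5 6) (7 8 9) (10 11 12)))  ; => (22 26 30)
```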

### Ex 2.37.

Still more fill-in-the-blanks, this time for matrix operations. These build on each other nicely if you examine the definition given in the text for each operation, expressed in terms of the previous ones.

For `matrix-*-vector`, for instance, we want a vector (list) in which each term is a sum of products of terms from the matrix and vector. This is handled by the dot-product routine already defined; it’s simply a matter of getting the index correct. As our matrix is already expressed in terms of rows, the index *i* matches the rows in order. We can just take each row at a time for the map, and use `(dot-product row v)` to get the *i*th entry in the result.

Transposition is easily accomplished with `accumulate-n`, again because of the way the rows are stored as a list. Since `accumulate-n` works by processing one item from each list at a time, we need only `cons` the elements up into the new rows of our transposed matrix.

Finally, matrix multiplication can make use of `matrix-*-vector`, by taking our transposed `cols` and multiplying them by the rows of `m`. A lengthier alternative, more similar to the way matrices are multiplied by hand, would be to go back to using `dot-product` and another `map` internally. It would amount to the same process, though, and given the blanks we are to fill in, this approach seems the most concise.
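Putting the pieces together (with `accumulate` and `accumulate-n` restated so the sketch runs on its own):

```scheme
(define (accumulate op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence)
          (accumulate op initial (cdr sequence)))))

(define (accumulate-n op init seqs)
  (if (null? (car seqs))
      '()
      (cons (accumulate op init (map car seqs))
            (accumulate-n op init (map cdr seqs)))))

;; from the text: sum of element-wise products
(define (dot-product v w)
  (accumulate + 0 (map * v w)))

;; each entry of the result is a row of m dotted with v
(define (matrix-*-vector m v)
  (map (lambda (row) (dot-product row v)) m))

;; cons up one element from each row into the new rows
(define (transpose mat)
  (accumulate-n cons '() mat))

;; each row of the product is a row of m times all columns of n
(define (matrix-*-matrix m n)
  (let ((cols (transpose n)))
    (map (lambda (row) (matrix-*-vector cols row)) m)))
```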

### Ex 2.38.

There is already a good name in mathematics for the property of an operation that yields the same result regardless of the order of the arguments: we call it commutative. Strictly speaking, commutativity alone is not quite enough for `fold-left` and `fold-right` to agree on every sequence; the operation should be associative as well. (Averaging two numbers, for example, commutes but is not associative, and folding it in the two directions gives different results.) An operation that is both commutative and associative will be sufficient, like addition or the Boolean test `and`.

Finding good examples outside of arithmetic or Boolean algebra can be a little tricky. In the first place, for any fold operation to work, the result of the operator must be something that can itself be an input to the operator, and such functions are rare (although many operations can be forced into that mode). One that comes to mind is something that creates a data structure, such as a tree, from a list of input, though even with typical trees it’s not always the case that their construction is independent of the order of the input.
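The contrast is easy to see by running the exercise’s own examples through the two folds (definitions restated from the text):

```scheme
;; fold-right is the accumulate from the text
(define (fold-right op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence)
          (fold-right op initial (cdr sequence)))))

;; fold-left combines from the other end, iteratively
(define (fold-left op initial sequence)
  (define (iter result rest)
    (if (null? rest)
        result
        (iter (op result (car rest)) (cdr rest))))
  (iter initial sequence))

;; division is neither commutative nor associative: the folds disagree
(fold-right / 1 (list 1 2 3))  ; => 3/2
(fold-left / 1 (list 1 2 3))   ; => 1/6

;; addition is both, so direction does not matter
(fold-right + 0 (list 1 2 3))  ; => 6
(fold-left + 0 (list 1 2 3))   ; => 6
```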

### Ex 2.39.

If we move from right to left for reversal, we’ll want to put each new element at the end of the resultant list, and so we use `append` to accumulate the result. Heading in the other direction (as with `fold-left`), the elements from the original list are added to the front of the resultant list, and so we use `cons` instead.
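A sketch of both versions (the names `reverse-r` and `reverse-l` are mine, to avoid shadowing the built-in `reverse`; the fold definitions are restated from the text):

```scheme
(define (fold-right op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence)
          (fold-right op initial (cdr sequence)))))

(define (fold-left op initial sequence)
  (define (iter result rest)
    (if (null? rest)
        result
        (iter (op result (car rest)) (cdr rest))))
  (iter initial sequence))

;; fold-right: each element goes after the already-reversed tail
(define (reverse-r sequence)
  (fold-right (lambda (x already-reversed)
                (append already-reversed (list x)))
              '()
              sequence))

;; fold-left: each element is consed onto the front as we go
(define (reverse-l sequence)
  (fold-left (lambda (result x) (cons x result))
             '()
             sequence))
```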

### Ex 2.40.

This doesn’t involve anything more than paying attention to what was done in the text. This problem was solved and already written out, twice in fact, at the start of this subsection: the first time was to introduce `flatmap`, and the second within the definition of `prime-sum-pairs`. Either one will work (though of course using `flatmap` is shorter).

As for the second part of the question, once we’ve pulled the relevant part out of `prime-sum-pairs`, we simply re-insert our new procedure back in the same place, with the appropriate argument names.
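The extracted procedure, with its helpers restated from the text so it runs on its own:

```scheme
(define (enumerate-interval low high)
  (if (> low high)
      '()
      (cons low (enumerate-interval (+ low 1) high))))

(define (flatmap proc seq)
  (if (null? seq)
      '()
      (append (proc (car seq))
              (flatmap proc (cdr seq)))))

;; all pairs (i j) with 1 <= j < i <= n
(define (unique-pairs n)
  (flatmap (lambda (i)
             (map (lambda (j) (list i j))
                  (enumerate-interval 1 (- i 1))))
           (enumerate-interval 1 n)))

(unique-pairs 3)  ; => ((2 1) (3 1) (3 2))
```

With this in place, `prime-sum-pairs` reduces to mapping and filtering over `(unique-pairs n)`.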

I treat this as a relatively trivial task, but identifying areas within functions that can be pulled out and fashioned into functions in their own right is a fairly common task in programming. It’s a stylistic choice whether one should do this in cases where it does not reduce the amount of code but merely simplifies the description at one level. It seems to me that the style of breaking a procedure into smaller named chunks was a bit more in fashion when this book was originally written, but that’s not necessarily a reason not to adopt it if you find that it makes your code more comprehensible.

### Ex 2.41.

I approached this by building on the previous exercise. We need to filter the set of triples for those that sum to the target value *s*. Since the triples are ‘distinct integers’, it makes sense to first create a function `unique-triples` that handles this.

Flatmap comes to our help here; we’re just adding another nesting to the process of picking pairs. Since `unique-pairs` was already written, picking all unique pairs less than *i* builds this list easily.

At this point getting the desired set of triples is a matter of using `filter`, along with a filtering function that tests whether the members of the triple sum to the value *s*. I realized after writing it that even this test could be simplified by using `accumulate`/`foldr` to calculate the sum (technically that doesn’t verify that the sequence is a triple, but neither does my existing solution).
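A sketch of that approach, with the helpers from the text and the previous exercise restated so it is self-contained (`triples-summing-to` is my name for the final procedure):

```scheme
(define (accumulate op initial sequence)
  (if (null? sequence)
      initial
      (op (car sequence)
          (accumulate op initial (cdr sequence)))))

(define (enumerate-interval low high)
  (if (> low high)
      '()
      (cons low (enumerate-interval (+ low 1) high))))

(define (flatmap proc seq)
  (if (null? seq)
      '()
      (append (proc (car seq)) (flatmap proc (cdr seq)))))

(define (filter predicate sequence)
  (cond ((null? sequence) '())
        ((predicate (car sequence))
         (cons (car sequence) (filter predicate (cdr sequence))))
        (else (filter predicate (cdr sequence)))))

(define (unique-pairs n)
  (flatmap (lambda (i)
             (map (lambda (j) (list i j))
                  (enumerate-interval 1 (- i 1))))
           (enumerate-interval 1 n)))

;; one more nesting: cons each i onto all unique pairs below it
(define (unique-triples n)
  (flatmap (lambda (i)
             (map (lambda (pair) (cons i pair))
                  (unique-pairs (- i 1))))
           (enumerate-interval 1 n)))

;; keep only the triples whose members sum to s
(define (triples-summing-to s n)
  (filter (lambda (triple) (= s (accumulate + 0 triple)))
          (unique-triples n)))
```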

### Ex 2.42.

The first two procedures aren’t too tough to make: an empty board can be an empty list, and `adjoin-position` adds a position with a new row and column in whatever fashion you prefer. Some sort of list or pair is the most likely candidate.

I additionally added selectors `rank` and `file` to make it easier to refer to the positions. The terms ‘row’ and ‘column’ would also work, although I went with the chessboard convention. Technically, for this puzzle it wouldn’t matter too much if these were in a different order, as long as it’s consistent in your implementation: a transposed solution to the queens puzzle is just as valid.
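One possible shape for this representation (a position as a two-element list; the constructor name `make-position` is mine, and any consistent representation works just as well):

```scheme
(define empty-board '())

;; a position pairs a rank (row) with a file (column)
(define (make-position r f) (list r f))
(define (rank position) (car position))
(define (file position) (cadr position))

;; place a queen at new-row in column k atop the existing placements
(define (adjoin-position new-row k rest-of-queens)
  (cons (make-position new-row k) rest-of-queens))
```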

The test for a safe position is a more complex procedure. It can be broken down into two parts: first, we have to locate the queen in the *k*th column given a full board, and second, we need to check it against all other placed queens.

I did each portion fairly generically; I think some speed-up could be achieved based on how this solution is created, but it was actually easier for me not to worry about that aspect and to consider `safe?` as a routine in isolation.

To find a queen in the *k*th column, I wrote `first-match`, which will process a list and return the first element for which a passed procedure returns true. For our concerns, we only need to pick one queen in the requested column, for if there are others the check will discover them and simply return false. Thus taking the first one seen when searching through the placed queens will work (and if no queen is found, we trivially return true, as there is no queen to be under attack).
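A sketch of `first-match` as described (returning `#f` when nothing matches is my choice of sentinel):

```scheme
;; return the first element of lst satisfying pred, or #f if none does
(define (first-match pred lst)
  (cond ((null? lst) #f)
        ((pred (car lst)) (car lst))
        (else (first-match pred (cdr lst)))))
```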

Once we have a queen’s location, we need to test that nothing is attacking it. Naturally we expect a series of conditionals, and here’s the relevant portion of the test:
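The original listing was lost in transit; here is a sketch of what that portion looks like, reconstructed from the description below. The helper name `attacks?` is mine; the `rank`/`file` selectors and the names `this-square` and `check-square` follow the prose.

```scheme
(define (rank position) (car position))
(define (file position) (cadr position))

;; does the queen at this-square attack the queen at check-square?
(define (attacks? this-square check-square)
  (and (not (eq? this-square check-square))          ; skip the queen under test
       (or (= (file this-square) (file check-square))  ; same file
           (= (rank this-square) (rank check-square))  ; same rank
           (= (abs (- (rank this-square) (rank check-square)))  ; same diagonal
              (abs (- (file this-square) (file check-square)))))))
```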

I give the name `this-square` to the position of a queen that’s on the board. The `check-square` is the location of the queen whose safety is being tested. The first conditional is to discard the queen under test herself (since she would be in the list of placed queens). Next we test if the files match, and then the ranks; either of those means the queen is under attack on a straight line. Finally, there is the diagonal test, which handles all diagonals easily: if the two squares are the same distance apart in rank and in file, in any direction, then they lie on a diagonal and the queen is under attack. Technically there could be a piece in between the two in any of these cases that would block a direct attack, but since all pieces on the board are queens it makes no difference.

If a placed queen passes all of those conditionals, the remaining positions are checked in turn. If none of them finds an attacker, then we can say we’re safe.

Ensuring that the whole system works is done by comparing with a list of known solutions. For larger boards, it’s best to just look at the number of solutions calculated. For the traditional 8×8 chessboard, the expected number is 92 (this algorithm does not exclude rotations or reflections), but as the board size grows the number of solutions grows roughly exponentially, a detail that becomes critical in the next exercise.

### Ex 2.43.

Whenever there’s a nested loop, the inner loop is necessarily going to be iterated more often than the outer one, so if any optimization can be done, it should be to speed up the inner loop. Looking at Louis’s version, we find that the inner loop calls `queen-cols`, instead of `enumerate-interval` as in the original version. The latter is a much quicker procedure, since it simply returns a short list. The `queen-cols` procedure, on the other hand, is a complex and recursive operation that takes a lot more computation time than enumerating the interval. There’s a lot of effort being wasted in computing `queen-cols` for every step up to the board size.

To get a handle on how much extra effort, we can see that what was a single recursive call to `queen-cols` at each level becomes one call for every row being tried, so the calls multiply at every one of the *k* levels. This is (in Big-O notation) **O**(*k*^*k*). Getting an exact ratio in terms of our original **T** is a bit tough to calculate, but we can be reasonably sure it’s not going to be *worse* than **T**^*k*. This also assumes that `enumerate-interval` is trivially short compared to `queen-cols`, which is probably close enough to being true.

Experimenting by counting the actual calls to the function reveals that, using Louis’s routine, the number of calls for a board size of *k* is in fact (*k*^(*k*+1) − 1)/(*k* − 1), which is indeed on the order of *k*^*k*. But testing against actual processing time shows that the amount of time is not increasing quite so exponentially, so we are not doing as poorly as we might have expected (some of this could well be the interpreter saving us from ourselves).
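That closed form is just the geometric series 1 + *k* + *k*² + … + *k*^*k*, which is easy to sanity-check (the helper names here are mine, not from the original test code):

```scheme
;; closed form for the call count (valid for k > 1)
(define (louis-calls k)
  (/ (- (expt k (+ k 1)) 1)
     (- k 1)))

;; the same count as an explicit sum of k^i for i = 0..k
(define (louis-calls-sum k)
  (let loop ((i 0) (total 0))
    (if (> i k)
        total
        (loop (+ i 1) (+ total (expt k i))))))

(louis-calls 8)  ; => 19173961 calls, versus 9 in the original
```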

I also did a test in this section that verifies we’re getting the same answer with Louis’s routine. This is both a check on his work and a way to ensure that I entered it correctly. Here I impose a sorting on each list of solutions, in order to compare them exactly. It’s one way of finding out whether two lists in random order match (although not all lists can be easily or quickly ordered).