
SICP 2.5.1 Solutions

August 9, 2015 10:28

Ex. 2.77

The procedure magnitude is defined to call apply-generic with the given argument, no matter what kind of number it receives. The error occurs because the table has no entry for magnitude with the type complex; that is exactly what the additions to the complex package supply.

The new definitions are rather simple. All they do is call magnitude (or real-part, etc.) again. However, this time the argument is different. Whenever apply-generic is used, it strips off the initial type tag. Since complex numbers in the system have two type tags, the second call to magnitude will retrieve the appropriate function for the secondary type (either rectangular or polar).
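
The additions themselves, as given in the exercise statement, are just four table entries inside install-complex-package, each of which re-dispatches the selector on the inner tag:

(put 'real-part '(complex) real-part)
(put 'imag-part '(complex) imag-part)
(put 'magnitude '(complex) magnitude)
(put 'angle '(complex) angle)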

While it’s not a bad idea to go through the process of calling magnitude manually, we can use a programmatic trace to verify that we’re on the right track with this line of thought. With it enabled, the series of procedure calls is as follows (this is not code, but the trace output with my notations):

(magnitude      (complex rectangular 3 . 4) )  <-- global version 
(apply-generic  magnitude (complex rectangular 3 . 4))
(magnitude      (rectangular 3 . 4))          <-- global version 
(apply-generic  magnitude (rectangular 3 . 4))
(magnitude   (3 . 4))  <-- local to package
(real-part  (3 . 4))   
(imag-part  (3 . 4)) 

We can see that the system goes through two calls to apply-generic for the magnitude, first with a number tagged as complex, and then with the number tagged as rectangular. The call with a rectangular type then looks up the version of magnitude that was added with the rectangular package. The third call to magnitude in our trace is not the one defined at the top level, but the one inside install-rectangular-package. (Racket’s trace indicates this by marking the source file and line number for the call; I’ve stripped that information out here.) There are also two more function calls that occur in the course of calculating the magnitude, which are likewise the versions local to the rectangular package.
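
For reference, a trace like this can be produced with Racket’s racket/trace library; a minimal sketch, assuming the generic magnitude and apply-generic are defined at the top level as in the book’s code:

(require racket/trace)

(trace magnitude apply-generic)              ; trace both top-level procedures
(magnitude (make-complex-from-real-imag 3 4))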

Ex. 2.78

By modifying the suggested procedures, we can check for ordinary numbers by using number? directly instead of checking the type tag. Because we want to store the number as itself, attach-tag does not attach a tag to an actual number but simply returns it. Within contents, we return the datum itself when it is a simple number (in fact, we could probably get away with returning anything that is not a pair unchanged). The final change is to type-tag: to keep the system working as before, we still need to report the tag of a regular number as scheme-number.
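
A sketch of those modified procedures, based on the versions from section 2.4.2 with the number? checks added as described:

(define (attach-tag type-tag contents)
  (if (number? contents)
      contents                               ; store ordinary numbers as themselves
      (cons type-tag contents)))

(define (type-tag datum)
  (cond ((number? datum) 'scheme-number)     ; bare numbers still report a tag
        ((pair? datum) (car datum))
        (else (error "Bad tagged datum -- TYPE-TAG" datum))))

(define (contents datum)
  (cond ((number? datum) datum)              ; a bare number is its own contents
        ((pair? datum) (cdr datum))
        (else (error "Bad tagged datum -- CONTENTS" datum))))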

A final note: These modifications do not actually change any numbers that are made using make-scheme-number, but they do allow them to be used alongside bare numbers. All calculations involving them will now end up returning bare values, however, as show-basic-arithmetic demonstrates.
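
For example, assuming the package system’s generic add is installed (these particular calls are only an illustration):

(add (make-scheme-number 3) 4)    ; => 7, a bare number
(add 2.5 (make-scheme-number 1))  ; => 3.5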

Ex. 2.79

Since we need equ? to work for each type in the system, we have to consider whether we need a version for each type or if one can cover them all. It turns out to be more sensible to apply different standards of equality based on the type. Here are the three versions:

; Rational
(put 'equ? '(rational rational)
     (lambda (a b) (= (* (numer a) (denom b)) (* (numer b) (denom a))))
     )

; Ordinary/Scheme number
(put 'equ? '(scheme-number scheme-number)
     (lambda (a b)  (close-enough? a b))
     )

; Complex
(put 'equ? '(complex complex)
     (lambda (a b) (and (close-enough? (real-part a) (real-part b))
                        (close-enough? (imag-part a) (imag-part b))
                        )
       )                                    
     )

Rationals are the easiest to explain. A simple version would compare numerator to numerator and denominator to denominator, and consider the numbers equal if both match. However, what about the fraction 6/8 compared to 3/4? These are equal in value, but they would not be considered equal by that simple version. A better definition of equality allows non-reduced rationals to be equal to each other, and the easy way to do this is to cross-multiply the numerators and denominators: if the products are equal, the numbers are equal.

For ordinary numbers, it might seem easy enough to use the built-in equality operator and be done with it. However, because this is an arithmetic system where we’ll be performing calculations, we may want a slightly different definition of what equality is.

The reason for this has to do with how floating-point values are actually represented as data. A floating-point value is stored as a binary fraction, and that fraction may not be able to exactly match the decimal value we intend. After a calculation, the result may therefore differ from the exact expected value by a very tiny amount. Rationals don’t have this problem, since their numerators and denominators must be integers and our calculations on them only involve integer values. But using an operator like = with non-integer values would make it hard to use equ? in a meaningful way, since many comparisons would fail unexpectedly.

It’ll be more useful to define two numbers as equal if they are within some very small range of each other. Ideally we would choose the maximum difference between a number and the decimal values that it approximates, to maintain the highest precision. However, once we start performing calculations, the tiny differences can build up quickly, and the error in the result is difficult to predict. Using an arbitrary larger value will be sufficient for our purposes.
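
A minimal sketch of what close-enough? might look like; the tolerance name and its value here are an arbitrary choice, not anything prescribed by the system:

(define tolerance 0.00001)         ; arbitrary; small enough for these exercises

(define (close-enough? a b)
  (< (abs (- a b)) tolerance))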

A more complete understanding of floating-point math in binary systems can be found by reading this classic article, or, for a shorter and more simplified version, this website.

The package’s complex numbers illustrate the need for this approach. Our complex numbers internally use mathematical functions involving floating-point values. If we do not use an approximation comparison, it would be difficult to ever have a rectangular and a polar number that are equal to each other, since the conversion necessary to compare them may well introduce slightly different results. That’s a problem when the system ought not to care about which underlying representation is used.

To test equality for complex numbers, then, we use the same close-enough? function used for Scheme numbers, comparing the real parts and the imaginary parts separately. By defining equ? only at the complex level, we can compare any two complex numbers without caring about their secondary type. We could have used magnitude and angle instead; either way, only one implementation is needed. This naturally relies on the additional selectors added in Exercise 2.77 to make the complex package work properly.
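
As an illustration, assuming equ? is installed as a generic operation dispatched through apply-generic, a rectangular and a polar number representing the same value should compare as equal even though the conversion introduces floating-point error:

(define z-rect  (make-complex-from-real-imag 3 4))
(define z-polar (make-complex-from-mag-ang 5 (atan 4 3)))

(equ? z-rect z-polar)   ; => #t with close-enough?; may be #f with an exact =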

Ex. 2.80

With equ? defined, it’s pretty easy to see that =zero? works along similar lines. Each type of number will have its own operator in the table.

For ordinary numbers, we merely need to see if the number is close enough to 0.

Rationals and complex numbers are slightly different. For a rational number, only the numerator needs to be equal to zero, so a simple check of that is sufficient. Complex numbers are considered to be zero if they have zero magnitude, and that is the check we make. In my solutions, I used the =zero? operator in both of those definitions. This only works if we know that the values returned by numer and magnitude will work in the arithmetic system, but since we added ordinary Scheme numbers, we know that they will at this point.
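
A sketch of the three table entries, following the description above; each put goes inside its respective install-...-package, and the rational and complex versions re-use the generic =zero? on the numerator and the magnitude:

; Ordinary/Scheme number
(put '=zero? '(scheme-number)
     (lambda (x) (close-enough? x 0)))

; Rational: zero exactly when the numerator is zero
(put '=zero? '(rational)
     (lambda (r) (=zero? (numer r))))

; Complex: zero when the magnitude is zero
(put '=zero? '(complex)
     (lambda (z) (=zero? (magnitude z))))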

Extra: Expanding on testing and the zero-angle complex number problem

Once equ? and =zero? have been added to the system, it becomes possible to run the full set of tests. For each type of number, we run the basic arithmetic property tests, and then there are a few tests to check on actual computations. This isn’t terribly exhaustive, but it can catch at least some simple problems with the system.

One thing we discover is that sometimes the complex numbers (specifically, the rectangular ones) do not pass some of the tests. This problem was already mentioned in 2.4.2. The basic difficulty is this: using the simple conversion method, the rectangular number 0 + 0 i has an undefined angle value. Any calculation that relies on computing its angle will cause an error.

My fix was to add a check in angle that determines if the real and imaginary parts are both 0. If that is so, it just returns 0 for the angle.
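
Inside the rectangular package, that check might look like this (a sketch; real-part and imag-part here are the package’s internal selectors):

(define (angle z)
  (if (and (= (real-part z) 0) (= (imag-part z) 0))
      0                                      ; define the angle of 0+0i as 0
      (atan (imag-part z) (real-part z))))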

At this point our complex numbers pass all the tests, but since we know this could be a problem, we ought to add a few tests for this specific situation. Why add more tests when we already have some that will fail on these values? The primary reason is that the tests themselves provide information on how the system should work. The calculation tests are not specifically looking at zero valued complex numbers, and if they were somehow changed then this error could go undetected. Adding a specific test is a reminder that we require this functionality. It also allows for the tests to be changed without worrying that we could be losing a ‘hidden’ test.

Just adding one test can also make us think of others that we may want (sometimes we might not need to add the extra tests, but just the consideration can help in understanding how we actually want the system to work). For example, in this case I went on to add a few tests that describe how zero-valued complex numbers (and also =zero?) should work.

(define (zero-angle-tests)
  (let ((z-ma (make-complex-from-mag-ang 0 0))
        (z-alt (make-complex-from-mag-ang 0 2)) 
        (z-ri (make-complex-from-real-imag 0 0))
        (c1 (make-complex-from-real-imag 3 5))
        (nc1 (make-complex-from-real-imag -3 -5))
        )
    (displayln "Checking angle of zero-length complex numbers")
    (test-equ (lambda () (angle z-ma)) 0 "polar complex number with zero values has angle of 0")
    (test-equ (lambda () (angle z-alt)) 0 "polar complex number with non-zero angle value has angle of 0")  ; optional condition
    (test-true (lambda () (=zero? z-alt)) "polar complex number with non-zero angle is =zero?") ; optional condition
    (test-equ (lambda () (angle z-ri)) 0 "rect. complex number with zero values has angle of 0")
    (test-equ (lambda () (angle (add c1 nc1))) 0 "result of computation with rect. numbers has angle of 0")
    )
  )

Let’s look at what each of these tests does.

The first test makes sure that polar numbers with both values 0 work as expected. One could argue over whether this test is strictly necessary, since polar numbers are working correctly. In general, values of 0 tend to be problematic, so I’d keep it in.

Adding that test also made me realize that polar numbers of length 0 can have any angle. There remains the open question of whether such numbers should compare as zero-valued, and whether their angle should always be considered 0. The next two tests cover those choices and can be considered optional. My own solution doesn’t force the value to 0, for instance.

The next one checks that rectangular numbers have the correct angle value, which is the issue the fix is intended to cover. We also want to make sure that the fix works even when we don’t create the number explicitly: the last test checks that a zero-valued number arising as the result of a calculation still produces proper results. If you look at the implementation, you may note that the calculations actually end up calling the constructors, which would seem to make that test unnecessary. However, our tests should not make assumptions about implementation unless it is a critical feature of the system, and in this case it’s debatable whether that is so.

Over the course of the book we’ll see situations like this one, in which not all of a system’s behavior is delineated in the text. Finding ways to test it often raises questions that I answer for myself in the tests, but I would encourage you to alter and extend the tests as you see fit, since it is likely you will notice additional aspects that I have not considered.

Solutions 2.5.1