magnitude is defined to call
apply-generic with the given argument, and this happens no matter what kind of number it is. The error occurs because it cannot find an entry for
magnitude with the type
complex. The newly-added definitions are rather simple. All they do is call the top-level generic selectors (magnitude,
real-part, etc.) again. However, this time the argument is different. Whenever
apply-generic is used, it strips off the initial type tag. Since complex numbers in the system have two type tags, the second call to
magnitude will retrieve the appropriate function for the secondary type (either rectangular or polar).
While it’s not a bad idea to go through the process of calling
magnitude manually, we can use a programmatic trace to verify that we’re on the right track with this line of thought. With tracing enabled, the series of procedure calls is as follows (this is not code, but the trace output with my annotations):
We can see that the system goes through two calls to
apply-generic for magnitude: once with a number tagged as
complex, and a second time with the number tagged as
rectangular. The call with a
rectangular type then looks up the version of
magnitude that was added with the rectangular package. The third call to
magnitude in our trace is not the one defined at the top level, but the one inside
install-rectangular-package. (The trace indicates this by marking the source file and line number for the call; I’ve stripped that information out here.) There are also two more function calls that occur in the course of calculating the magnitude, which are still the versions local to the rectangular package.
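The two-level dispatch can be sketched in miniature. This is not the book's full system, just a stand-in: a tiny association-list table replaces the real put/get, and apply-generic is the simplified single-argument version.

```scheme
;; Minimal operation table standing in for the book's put/get.
(define *op-table* '())
(define (put op type item)
  (set! *op-table* (cons (list op type item) *op-table*)))
(define (get op type)
  (let loop ((t *op-table*))
    (cond ((null? t) #f)
          ((and (eq? (caar t) op) (equal? (cadar t) type)) (caddar t))
          (else (loop (cdr t))))))

;; Tagged data.
(define (attach-tag tag x) (cons tag x))
(define (type-tag datum) (car datum))
(define (contents datum) (cdr datum))

;; Single-argument apply-generic: strips one layer of tags per call.
(define (apply-generic op arg)
  (let ((proc (get op (list (type-tag arg)))))
    (if proc
        (proc (contents arg))
        (error "No method for type" (list op (type-tag arg))))))

(define (magnitude z) (apply-generic 'magnitude z))

;; The rectangular package registers its local magnitude...
(define (square x) (* x x))
(put 'magnitude '(rectangular)
     (lambda (z) (sqrt (+ (square (car z)) (square (cdr z))))))

;; ...and Exercise 2.77's addition registers the *generic* magnitude
;; under 'complex, so the first dispatch simply re-dispatches on the
;; inner tag.
(put 'magnitude '(complex) magnitude)

(magnitude (attach-tag 'complex (attach-tag 'rectangular (cons 3 4))))
;; => 5 (possibly 5.0, depending on the Scheme's sqrt)
```

The key line is the last `put`: the entry under `'complex` is the generic `magnitude` itself, which is why the trace shows `apply-generic` twice before the rectangular package's code runs.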
By modifying the suggested procedures, we can make a type check for numbers by explicitly using
number? instead of checking the type tag. Because we want to store the number as itself, we do not attach the tag in
attach-tag but simply return the contents for an actual number. Within
contents, we return the thing itself when it is a simple number (in fact, we could probably get away with just returning the contents if they are not a pair). The final change is to
type-tag. To keep the system working as before, we still need to report the tag of a regular number as a scheme-number.
A final note: These modifications do not actually change any numbers that are made using
make-scheme-number, but they do allow them to be used alongside bare numbers. All calculations involving them will now end up returning bare values, however, since attach-tag no longer wraps numerical results in a tag.
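Concretely, the three modified procedures might look like this (a sketch following the changes described above; number? is the built-in predicate):

```scheme
(define (attach-tag type-tag contents)
  (if (number? contents)
      contents                           ; store a bare number as itself
      (cons type-tag contents)))

(define (type-tag datum)
  (cond ((number? datum) 'scheme-number) ; report the tag it used to carry
        ((pair? datum) (car datum))
        (else (error "Bad tagged datum: TYPE-TAG" datum))))

(define (contents datum)
  (cond ((number? datum) datum)          ; a bare number is its own contents
        ((pair? datum) (cdr datum))
        (else (error "Bad tagged datum: CONTENTS" datum))))
```

With these in place, (make-scheme-number 5) simply evaluates to 5, since attach-tag passes the number through untouched.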
Since we need
equ? to work for each type in the system, we have to consider whether we need a version for each type or if one can cover them all. It turns out to be more sensible to apply different standards of equality based on the type. Here are the three versions:
Rationals are the easiest to explain. A simple version would compare numerator to numerator and denominator to denominator, and define equality if both match. However, what about a case like 6/8 compared to 3/4? These are equal values, but they would not be considered equal by that simple version. A better definition of equality allows non-reduced rationals to be equal to each other, and the easy way to do this is to cross-multiply the numerators and denominators: if the products are equal, the numbers are equal.
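In the rational package the cross-multiplication check is one line. Here is a sketch, with pairs standing in for the package's internal representation and its numer/denom selectors:

```scheme
;; Stand-ins for the rational package's internal selectors.
(define (numer r) (car r))
(define (denom r) (cdr r))

;; Two rationals are equal iff cross-multiplying gives equal products:
;; a/b = c/d  <=>  a*d = c*b
(define (equ-rat? x y)
  (= (* (numer x) (denom y))
     (* (numer y) (denom x))))

(equ-rat? (cons 6 8) (cons 3 4))   ; => #t, even though 6/8 is unreduced
```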
For ordinary numbers, it might seem easy enough to use the built-in equality operator and be done with it. However, because this is an arithmetic system where we’ll be performing calculations, we may want a slightly different definition of what equality is.
The reason for this has to do with how floating-point values are represented. A floating-point value is stored as a binary fraction, and that fraction may not be able to exactly match the decimal value we intend. After a calculation, the result may then differ from the exact expected value by a tiny amount. Using a strict definition of equality would make it hard to use
equ? in a meaningful way.
It’ll be more useful to define two numbers as equal if they are within some very small range of each other. Ideally we would choose the maximum difference between a number and the decimal values that it approximates, to maintain precision. However, once we start performing calculations, the tiny differences can build up quickly, and the error in the result is difficult to predict. Using an arbitrary larger value will be sufficient for our purposes.
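A minimal version of such a comparison looks like this (the tolerance here is an arbitrary choice, not a value derived from the floating-point format):

```scheme
(define tolerance 1e-9)

;; Two numbers are "equal" if they differ by less than the tolerance.
(define (close-enough? a b)
  (< (abs (- a b)) tolerance))

(= 0.3 (+ 0.1 0.2))              ; => #f  (the binary fractions differ)
(close-enough? 0.3 (+ 0.1 0.2))  ; => #t
```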
The complex numbers highlight the need for this approach. Our complex numbers internally use mathematical functions involving floating-point values. If we do not use an approximation comparison, it would be difficult to ever have a rectangular and a polar number that are equal to each other, which is a problem when the system ought not to care which underlying representation is used.
To test equality for complex numbers, the function
close-enough? is used. It compares the real parts and imaginary parts separately. By defining
equ? at only the level of
complex, we can compare any two complex numbers without caring about their secondary type. We could equally have compared magnitude and angle; either way, only one version of equ? is needed. This naturally relies on the additional selectors added in Exercise 2.79 to make the complex package work properly.
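A sketch of the comparison, with car/cdr standing in for the generic real-part and imag-part selectors (in the real system these are the Exercise 2.79 generics, so both representations are handled transparently):

```scheme
(define (close-enough? a b) (< (abs (- a b)) 1e-9))

;; Stand-ins for the generic selectors.
(define (real-part z) (car z))
(define (imag-part z) (cdr z))

;; Installed once, under the 'complex tag only.
(define (equ-complex? z1 z2)
  (and (close-enough? (real-part z1) (real-part z2))
       (close-enough? (imag-part z1) (imag-part z2))))

;; A number built from magnitude 5 and angle atan(4,3) should equal
;; 3+4i, even though the float arithmetic is not exact.
(define z-from-polar (cons (* 5 (cos (atan 4 3)))
                           (* 5 (sin (atan 4 3)))))
(equ-complex? z-from-polar (cons 3 4))   ; => #t
```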
With equ? defined, it’s pretty easy to see that
=zero? works along similar lines. Each type of number will have its own operator in the table.
For ordinary numbers, we merely need to see if it is close enough to 0.
Rationals and complex numbers are slightly different. For a rational number, only the numerator needs to be equal to zero, and so a simple check of that is sufficient. Complex numbers are considered to be zero if they have zero magnitude, and that is the check we make. In my solutions, I used the
=zero? operator in both of those definitions. This only works if we know that values returned by the
magnitude function will work in the arithmetic system, but since we added ordinary Scheme numbers, we know that they will at this point.
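Each version of =zero? is only a line or two. A sketch (close-enough? as above; the rational and complex versions live inside their packages in the real system):

```scheme
(define (close-enough? a b) (< (abs (- a b)) 1e-9))

;; Ordinary numbers: close enough to 0.
(define (=zero-scheme-number? x) (close-enough? x 0))

;; Rationals: only the numerator matters (car stands in for numer).
(define (=zero-rat? r) (= (car r) 0))

;; Complex: zero magnitude, checked via the generic =zero? applied to
;; the ordinary Scheme number that magnitude returns:
;;   (put '=zero? '(complex) (lambda (z) (=zero? (magnitude z))))
```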
Extra: Expanding on testing and the zero-angle complex number problem
Now that equ? and =zero? have been added to the system, it becomes possible to run the full set of tests. For each type of number, we run the basic arithmetic property tests, and then there are a few tests to check on actual computations. This isn’t terribly exhaustive, but it can catch at least some simple problems with the system.
One thing we discover is that the complex numbers (specifically, the rectangular ones) do not pass some of the tests. This problem was already mentioned in 2.4.2. The basic difficulty is this: using the simple conversion method, the real-imaginary number
0 + 0i has an undefined angle value. Any calculation that relies on computing its angle will cause an error.
My fix was to add a check in
angle that determines if the real and imaginary parts are both 0. If that is so, it just returns 0 for the angle.
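Inside install-rectangular-package the guard might look like this (car/cdr stand in for the package's local real-part/imag-part; what (atan 0 0) does is implementation-dependent, some Schemes return 0 and others signal an error, which is why the explicit check is worthwhile):

```scheme
(define (angle z)
  (let ((x (car z))    ; real part
        (y (cdr z)))   ; imaginary part
    (if (and (= x 0) (= y 0))
        0              ; define the angle of 0+0i to be 0
        (atan y x))))

(angle (cons 0 0))   ; => 0 instead of a possible error
```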
At this point our complex numbers pass all the tests, but since we know this could be a problem, we ought to add a few tests for this specific situation. Why add more tests when we have some that cover the situation? The primary reason is that the tests themselves provide information on how the system works. Adding a specific test is a reminder that we require this functionality. It also allows for the tests to be changed without worrying that we could be losing a ‘hidden’ test.
Just adding one test can also make us think of others that we may want (sometimes we might not need to add the extra tests, but just the consideration can help in understanding how we actually want the system to work). For example, in this case I went on to add a few tests that describe how zero-valued complex numbers (and also
=zero?) should work.
Let’s look at what each of these tests does.
The first test makes sure that polar numbers with both values 0 work as expected. One could argue over whether this test is strictly necessary, since polar numbers are working correctly. In general, values of 0 tend to be problematic, so I’d keep it in. Adding it also made me realize that polar numbers of length 0 can have any angle. There is then the open question of whether such numbers should also be equal to each other, possibly by forcing their angle to 0.
The rectangular numbers are then checked, which is what the fix is intended to cover. But we also want to make sure that the fix works even when we don’t create the number explicitly. A zero-valued number can arise as the result of a calculation, and so the last test is there to catch that. [If you look at the implementation, the calculations actually end up calling the constructors. However, our tests should not make that assumption unless it is a critical feature of the system, and it’s debatable whether that is so.]
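The third point, a zero arising from arithmetic rather than from a constructor, can be illustrated in miniature. Everything here is a simplified stand-in (pairs for rectangular numbers, a local sub, a magnitude-based zero test), not the system's actual code:

```scheme
(define (make-from-real-imag x y) (cons x y))
(define (sub z1 z2)
  (make-from-real-imag (- (car z1) (car z2))
                       (- (cdr z1) (cdr z2))))
(define (magnitude z)
  (sqrt (+ (* (car z) (car z)) (* (cdr z) (cdr z)))))
(define (=zero-complex? z) (< (magnitude z) 1e-9))

;; The zero here is produced by a calculation, not written literally,
;; so it exercises whatever path the arithmetic actually takes.
(=zero-complex? (sub (make-from-real-imag 3 4)
                     (make-from-real-imag 3 4)))   ; => #t
```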
Over the course of the book we’ll see situations like this one, in which not all of a system’s behavior is delineated in the text. Finding ways to test it often raises questions that I answer for myself in the tests, but I would encourage you to alter and extend the tests as you see fit, since it is likely you will notice additional aspects that I have not considered.