SICP 3.1.2 Exercises

May 28, 2017 11:28

While this section is mainly about the benefits of assignment, one drawback becomes almost immediately apparent once we start to test our procedures. An effect of variable assignment is that we can easily end up with procedures whose output varies for the same input, at times even randomly. The problem for testing is that we’d rather have some expected value to check the output against, which we can’t do if that output is unpredictable.

One way to deal with this might be to see if the result is in some range, or allow for an approximate answer. This is not necessarily reliable, however, when dealing with random values. In many cases there is a small chance of the value being outside the expected range, even when working as intended. That could result in a test that only passes, say, 99.9% of the time. When it fails, however, we cannot know if that is due to some introduced bug, or simply because we hit the 0.1% chance that time.

Such systems are thus inherently impossible to test perfectly for correctness. This doesn’t mean there is no place for randomness in testing, or that we should abandon the attempt. In fact, the second exercise mentions using a reset for random-number generators (technically these are pseudorandom numbers) in order to get a repeatable list of values, which is quite useful in exercising a system that can take a broad range of inputs. The arithmetic system from the last chapter would be one example. Using a resettable list allows us to repeat the test exactly, to see if a possible bug persists when the code is changed. Even though the testing approach is imperfect, it can sometimes find limitations of the system that might otherwise go undetected.
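As a small illustration of that idea (a sketch of my own, not code from the book or the exercise file), here is how a repeatable sequence might be produced, assuming Racket’s random-seed procedure; random-list is just a throwaway helper for this example:

; Resetting the generator to a fixed state makes the sequence repeatable.
(define (random-list n)
  (if (= n 0)
      '()
      (cons (random 100) (random-list (- n 1)))))

(random-seed 42)      ; reset the generator to a known state
(random-list 5)       ; some list of five values
(random-seed 42)      ; reset to the same state...
(random-list 5)       ; ...and the identical five values come out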

Since the book doesn’t fully explain how to implement rand (or rand-update specifically), I’ll recommend the built-in random procedure if you are using Dr. Racket’s “Pretty Big” (as I do) or MIT Scheme. Racket’s random does not work as required for random-in-range, however, so I’ve included an alternate implementation in the exercise file. Consult the documentation for random to learn how to set a particular state. For Exercise 3.6, you may need to learn more about how pseudorandom numbers are typically generated; the footnote in the text discussing rand-update gives some guidance.
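For reference, here is one way such an alternate random-in-range might look (a sketch of my own, not necessarily what is in the exercise file), assuming that (random) with no arguments returns a real between 0 and 1, as it does in Racket. The rand-update sketch follows the linear congruential scheme hinted at in the book’s footnote, with illustrative constants:

; One possible random-in-range when (random) yields a real in [0, 1):
(define (random-in-range low high)
  (+ low (* (random) (- high low))))

; A linear congruential rand-update, as hinted at in the book's footnote.
; The constants a, b, and m are illustrative choices, nothing special.
(define (rand-update x)
  (let ((a 1103515245)
        (b 12345)
        (m 2147483648))     ; m = 2^31
    (modulo (+ (* a x) b) m)))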

The book suggests that part of Exercise 3.5 is determining how to apply the integral estimator in order to check the value of π. For this reason, the tests in the exercise file are incomplete. What I have done is write a procedure, listed in full below, to make it easier to display the results of testing. It takes the number of trials to input to the estimator, an estimator function, and the actual value being tested, and it reports the calculated estimate (and optionally how close it comes to the actual value).

; Displays the estimate produced by running estimator for the given number
; of trials, along with the percent error relative to actual-value when the
; showerror flag (defined elsewhere in the exercise file) is true.
(define (monte-carlo-test trials estimator actual-value)
  (let ((result (estimator trials)))
    (display trials)
    (display " trials: ")
    (display result)
    (if showerror
        (begin
          (display "  % Error: ")
          (display (* 100 (abs (/ (- actual-value result) actual-value))))))
    (newline)))

The estimator procedure should take just one argument (the number of trials) and return an estimate. An example that uses estimate-integral and a triangle-shaped region is given in the file. It should serve as a guide for writing your own tests using the unit circle to get an approximation of π. Additionally, a few more tests are included to explore what happens to the estimate when certain parameters are varied, such as defining a different circle for the area test or altering the bounding box for the Monte Carlo integration.
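To give a sense of what such an estimator wrapper might look like, here is a rough sketch of my own (not the version in the file), assuming estimate-integral follows the signature from Exercise 3.5 — a predicate, the bounds x1 x2 y1 y2, and the trial count — and returns the estimated area of the region:

; Sketch of an estimator for monte-carlo-test: the region under y = x
; inside the unit square, whose true area is 1/2.
(define (triangle-estimator trials)
  (estimate-integral (lambda (x y) (<= y x))   ; predicate: is (x, y) inside the region?
                     0.0 1.0 0.0 1.0           ; bounding rectangle x1 x2 y1 y2
                     trials))

(monte-carlo-test 10000 triangle-estimator 0.5)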

Exercises 3.1.2