Quirksand

SICP

SICP 3.2.3 Exercises

October 28, 2017 11:05

Much like 3.2.2, this is another section with a single exercise involving environment diagrams. There will be a solution file with included code, merely to demonstrate the equivalence of the procedures. There isn’t much to say in advance of the exercises, so I’ll go over in more detail how I’ve drawn my environment diagrams. Here’s an example from the previous section, and I have a few more sample ones below.

The diagrams consist of frames, environment pointers, procedure objects, and my own evaluation boxes. Frames are boxes tinted blue, and contain a list of name-value pairs, i.e. ‘bindings’. The environment pointers may be attached to arrows, but are most often just labeled next to a frame with no arrow. The global environment’s frame is drawn using a larger box (as done in the text), and a lighter tint to show that it’s slightly special as the ‘base’ environment. The evaluation boxes indicate what is being evaluated when the diagram is depicting a point in time.

Initially the difference between environments and frames in the diagrams wasn’t entirely clear to me. The distinction is that a frame is a single box that contains variable bindings. The environment is the sequence of frames that can be followed back via pointers, which are represented by arrows. The names of the pointers are also synonymous with the ‘environment’. For example, when referring to Figure 3.1, the text says “environment D” to mean the environment that D points to. I don’t normally use labels on the arrows, or the initial arrow; the label to the left of a frame marks the start of an environment. Frames typically do not need to be named; in most cases, the number or name of the environment that begins at that frame can be used. For instance, in the diagram below, ‘Frame 21’ would refer to the box marked by E21. The environment E21 would be the sequence Frame 21 → Frame 2 → Global Frame.

Example of environments

For environment naming, the convention I typically follow is that the environment is assigned a number to distinguish it from others with the same parent, and a new digit is appended for each frame in the chain. This means that it’s usually possible to determine the ‘depth’ of an environment and also know its enclosing/parent environment just from the name. Environment ‘E213’ would be the third one created in the first environment of the second environment of the global environment. This doesn’t cover all cases, but the actual names of the environments are not themselves all that important; I just wanted something that could be mostly consistent and easy to come up with.

As I mentioned in the previous section, I also show what is in the process of being evaluated. The current code under evaluation is included in a yellow box next to the frame for it. Usually only one line of code will be written there, to indicate the line that is being evaluated. The name of the procedure (or a description, for lambdas) will be included as a label above the code box. There are many times when a procedure is in the middle of execution, but has called another procedure. In those cases, the code in the box will be slightly grayed out to show that it is not the ‘active’ frame. For a procedure that has completed evaluation, the yellow box will be smaller, and instead of code, the returned value will be shown inside it. The label on top will also use ‘<-’ to show that it is returning from the procedure.

Here is another example to clarify what happens with procedure calls. In this one, the function fun2 has finished and is returning a value. The other function, fun1, is still in progress, as it is awaiting the result of fun2. The definitions for the two procedures are not shown.

SICP

SICP 3.2.2 Solutions

October 22, 2017 11:23

In the following diagrams, the procedure objects are only shown one time, and then omitted in later diagrams. They are still there; it’s just not really necessary to show them each time, as that part of the diagram does not change. In our model, a procedure (the two-circle symbol with pointers to the parameter list and body, and its enclosing environment) is stored in memory somewhere. When the procedure is evaluated, that is done according to the bindings, but the stored procedure is not affected. Exactly what ‘evaluation’ is will be a major part of the next chapter; for now, we can consider that the evaluator just reads the procedure body and figures out which expression to evaluate next. In most cases I’ve added a box to my diagrams to show which line of the procedure is in the process of being evaluated.

Exercise 3.9.
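
Both versions of factorial come from the exercise statement (they first appear in Section 1.2.1); I’ll restate them here so the diagrams are easier to follow. First the recursive version:

(define (factorial n)
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))

And the iterative version, which uses the helper fact-iter:

(define (factorial n)
  (fact-iter 1 1 n))

(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))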

Let’s go over the recursive version first. In the global environment, we have the single procedure factorial defined. We make a call to it with (factorial 6) and a new evaluation frame is created:

Factorial 6

In the course of evaluating (factorial 6), another call to factorial is made. Since factorial is defined in the global environment, the evaluation frame for any call to it is created there as well. This will proceed for each successive call to factorial, all the way to (factorial 1) at the end:

Maximum depth of factorial

Each call is still awaiting the value returned by the call made after it. Once that inner call returns, the outer one can finish computing its own value and return as well. The environment for that call can then be removed, and this happens in the reverse of the order in which the environments were created.

For the iterative version, two procedures are defined in the global environment. Otherwise, the model initially looks quite similar to the recursive version, with the only difference being the code under evaluation:

The first procedure calls a second one with different parameters. That means the next frame to be created will be for evaluating fact-iter with the product, counter, and max-count set to their starting values. Each call to fact-iter after that will result in a new call to fact-iter with different arguments, until it is called with counter higher than max-count (the greyed-out environments will be explained shortly):

All calls for factorial

The repeated calls once again produce new frames in the global environment. That means seven more frames are created. Why seven instead of six, as with the recursive version? That happens because of a slight difference in the way the two versions terminate. The recursive procedure counts down to 1, and then terminates at that point. The iterative version counts up, and does not terminate until the counter is past the maximum. It requires an extra call and thus one more evaluation frame. In total, eight frames are created for the iterative version.

However, as alluded to in the text’s footnote for this exercise, this model doesn’t quite tell the whole story. It turns out that it’s not really necessary for all the environments to be preserved in the iterative version. Each of these calls (save the last one) will do nothing but call a procedure and return the value of that call. Nothing in that environment will be needed again. When that is the case, it’s possible to allow that environment to disappear. Scheme actually requires this behavior. It’s for this reason that I’ve grayed out all the frames — it’s not necessary for them to remain, and in fact, the interpreter will remove them. The recursive version cannot do this, since it must multiply the result by one of its own environment variables before returning.

Under this model, we see that the recursive version grows in space and then shrinks again. The iterative version takes up slightly more space initially, and has more calls (even with the difference in termination conditions), but it never actually requires more than one evaluation frame at a time. This mirrors the substitution model somewhat, which was shown in Figures 1.3 and 1.4 in the text.

There is no ‘solutions’ file for this section, since the procedures are explained in Section 1.2.1 in the text.

SICP

SICP 3.2 Exercises

October 15, 2017 11:25

The exercises in Section 3.2 (and some in Section 3.3) do not involve, or at least do not require, any code that needs to be verified by testing. The procedures are largely meant to be computed manually. Therefore there aren’t going to be any ‘exercise’ files for them. I will create solution files in order to demonstrate the code, but most of the actual answers are in understanding the environment model diagrams, or in working out by hand what the result is. Running it through the interpreter is as often as not just a means of finding the solution. The solutions will include my own diagrams in order to explain what happens at each step as well.

SICP

SICP 3.1.3 Solutions

August 6, 2017 11:01

Exercise 3.7.

This one doesn’t require modifying the account creation that already exists; what we’ll do instead is delegate any actions to the original account. Since that account is already considered to exist when we create a joint one, there is no need to do anything in make-joint other than to check the password and pass on the command to the original account if the password is correct. The joint account needs to work in the same way as the original, which means that make-joint must return a procedure. In this case, the ‘dispatch’ of that procedure only does one thing – check the joint accountholder’s password. If it matches, it makes a call to the original account using the original password.
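
A minimal sketch of that shape (the names and the complaint message here are just illustrative):

(define (make-joint account orig-password new-password)
  (lambda (pw m)
    (if (eq? pw new-password)
        (account orig-password m)                 ; delegate to the original account
        (lambda (amount) "Incorrect password")))) ; complaint, as in Exercise 3.3

; e.g. (define paul-acc (make-joint peter-acc 'open-sesame 'rosebud))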

There is one point of contention, however. That concerns the response to invalid passwords. If make-account has a separate password-checking procedure from make-joint, then we potentially have two separate places where the checking code exists. Future changes, such as adding the ‘call-the-cops’ alarm for example, would require changes in both procedures. Doing the same thing in two places isn’t generally a good practice, especially when from an external point of view the entity (the ‘account’, whether joint or single) is expected to be the same. The users (‘bank customers’ in this case) should be able to interact with the joint account exactly as if it were a regular account. That goes for improper interactions too; perhaps especially so.

There are several approaches to this problem. It might be feasible to delegate password responses to the original account, by delivering either a valid or invalid original password depending on whether the joint accountholder’s password is valid. We could also modify make-account to incorporate joint accounts as a special case, but there are many situations in which the original code can’t be easily modified. A robust way to deal with this is to have a separate password-checking processor. That way, both original accounts and joint accounts would always be working with the same system. Of course, that would require at least some changes to make-account, although to a lesser extent than adding in joint accounts as a special case would require.

Exercise 3.8.

If we’re considering that the addition will yield different results depending on the evaluation order of its arguments, then we have to think about what it means when we change the evaluation order. If we’re going ‘left-to-right’, that means the arguments are evaluated in sequence, with the leftmost argument evaluated first, then the next one, and so on. If it’s ‘right-to-left’, then the rightmost argument is evaluated first, in reverse fashion from the other order. Which means what we really want is for the procedure f to give different results depending on what arguments it has been called with in the past.

In order for the results of a procedure to always be different depending on when it is called, we need some sort of stored state. (Randomness would give different results, but none that we could rely on.) Our function f will be designed similarly to those in previous exercises, where a let variable is preserved locally in a lambda expression that is the required procedure.

Some open questions with the exercise are whether the procedure f only needs to exhibit this behavior one time, or if it can change when it’s repeated. We also don’t know if it can accept other input values besides 0 or 1. It’s definitely much easier to design one that can give the answer without any of those conditions; I originally had a solution that only fulfills the requirements of the text, but then came up with one that’s more versatile.

The first solution I have (a ‘filtering buffer’) will work only with the expressions as given in the text, and can handle repeated calls as long as certain conditions are met. In that version the saved state is just a toggle that switches back and forth. It is thus only valid as long as the calls to f come in pairs, and the input should be limited to the values 0 and 1.

My second approach does not have limitations on its input, and works no matter what the previous calls to f were. It saves two variables: one is the last input received, and the other keeps track of the last returned value. Here’s the procedure:

(define f
  (let ((returned 0)
        (last 0)
        )
    (lambda (x)
      (set! returned (+ (- returned) last))
      (set! last x)
      returned
      )
    )
  )

This function returns a value that is the negation of the most recently returned value, added to the most recent input. Suppose we call it with some value x as the input. It will save x as the last value and return some value, which we’ll call a. We can’t know what a is, since it depends on past calls. On the next call, it will return -a + x since x is currently stored as the last value. Therefore if we sum the results of two consecutive calls, we get a + (-a + x), which is simply x. This procedure works in such a way that summing the result of any two sequential calls yields whatever the input to the first call was. In this exercise, that means our sum will come out to whatever argument is evaluated first, whether it be from the left or from the right.
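
To make that concrete, here is a short trace, assuming f has just been defined (so both stored variables start at 0):

(f 0)   ; returns 0   (-0 + 0); last becomes 0
(f 1)   ; returns 0   (-0 + 0); last becomes 1   -- these two sum to 0, the first input
(f 0)   ; returns 1   (-0 + 1); last becomes 0
(f 1)   ; returns -1  (-1 + 0); last becomes 1   -- these two sum to 0, again the first input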

Solutions 3.1.3.

SICP

SICP 3.1.3 Exercises

July 16, 2017 11:05

There are only two exercises here. The first one is an addition to code from previous exercises (Section 3.1.1 specifically). If you have not completed all the exercises for that part, you’ll need to do so, or download the solution file. The tests also rely on check-eqv? defined as it was in that file, so make sure it exists either in the old file or that you put it in the new one. It’s a good idea to keep the old file as-is, with all its tests, so that if it somehow gets altered, it’ll be easier to determine where the error lies.

The second exercise initially seems difficult to test, since it asks for a function that changes its result depending on which order the arguments are evaluated in. In order to test it based on what we know how to do, there are two possible approaches. (We’ll learn other ways to change evaluation order later.) The first is just to change the expression itself, so that (+ (f 0) (f 1)) is compared with (+ (f 1) (f 0)). It turns out that the Scheme language does not actually specify a particular order of argument evaluation. That means that while solutions to this exercise will show a difference, there’s no way to know which one is actually ‘right-to-left’ or ‘left-to-right’. The second approach is to use nested let statements. Each let works to evaluate an ‘argument’ as a variable for the let, and by changing the order of nesting, the order of evaluation is changed. This will give us a guaranteed order, since we are evaluating the arguments prior to the addition statement. By capturing the result of these let expressions, we can determine if the results are as expected.
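
A minimal sketch of the nested-let version (here f is whatever procedure the exercise produces, and the variable names are just illustrative):

(define left-to-right-result
  (let ((first-arg (f 0)))
    (let ((second-arg (f 1)))
      (+ first-arg second-arg))))

(define right-to-left-result
  (let ((second-arg (f 1)))
    (let ((first-arg (f 0)))
      (+ first-arg second-arg))))

Because the inner let cannot be evaluated until the enclosing let has bound its variable, the order of the two calls to f is fixed, regardless of what the interpreter does with ordinary argument lists.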

Additionally, there’s an ‘optional’ part of the testing for the second exercise. The problem statement doesn’t clarify if the function only needs to work once, or work at any point in time. Adding more conditions makes the exercise considerably more difficult to solve, so it’s fine to ignore them, as they may not be an intended part of the exercise and thus don’t really need to be tested.

Exercises 3.1.3.

SICP

SICP 3.1.2 Solutions

June 11, 2017 10:50

Exercise 3.5.

This exercise has essentially two parts: the integration procedure itself, and then the means of applying it (which will also test that it functions correctly). I’ll discuss them separately.

Since we are using the Monte Carlo approach for integration, we first have to set up the parameters of a trial. We are given a hit/miss test, which is a procedure that returns true or false depending on whether the point is contained under the function or not. We pick random points using random-in-range with the range being x- and y-values of the bounding box. Then we run that through monte-carlo. The result will give us the fraction of points that were ‘hits’ in the defined region. To get the estimated area of our function, we multiply that fractional result by the total bounded area, which is just that of the rectangle defined by the corner points. As we test more points, the ratio of the valid points to the total points tested by the Monte Carlo trials approaches the ratio of the function’s area to the area of the bounding box, which is the desired integration result.

(define (estimate-integral P x1 x2 y1 y2 trials)
  (let ((total-area (* (- x2 x1) (- y2 y1)))
        (hit-fraction (monte-carlo trials (lambda () 
                                            (P (random-in-range x1 x2)
                                               (random-in-range y1 y2)
                                               )
                                            )
                                   )
                      )
        )
    (* hit-fraction total-area)
    )
  )
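
For reference, estimate-integral relies on monte-carlo and random-in-range as given in the text:

(define (monte-carlo trials experiment)
  (define (iter trials-remaining trials-passed)
    (cond ((= trials-remaining 0)
           (/ trials-passed trials))
          ((experiment)
           (iter (- trials-remaining 1) (+ trials-passed 1)))
          (else
           (iter (- trials-remaining 1) trials-passed))))
  (iter trials 0))

(define (random-in-range low high)
  (let ((range (- high low)))
    (+ low (random range))))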

The application/testing portion is asking us to estimate the value of π by using the unit circle. Since our integrator simply measures the area given certain constraints, and π is part of the formula for the area of a circle, all we need to determine π is to run our integrator over a circle, and divide out the non-π portion of the area formula (i.e. radius squared). The bounding box can be set by adding and subtracting one radius length to the center of the circle, as that gives us a square that tightly encloses our circle.

We then need a proper predicate to test whether a point is within a circle. Given a point (x, y), we can determine whether it’s in a circle by checking to see if the point’s distance from the circle’s center is less than the radius of the circle. I wrote a generic procedure that can accept circles of any radius centered on other locations (though the variables are set external to the test itself). This does mean there is more computation involved for the test, but the procedure runs fast enough for me that the extra time is barely noticeable.

(define circleP (lambda (x y)
                  (< (+ (sqr (- x xc)) (sqr (- y yc))) r_squared) 
                  )
  )

This is a simple application of the distance formula, but to save time in the calculation, the value is compared to the square of the radius instead of taking the square root of the other side of the equation.
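
Putting the pieces together, the π estimate itself can be as simple as the sketch below (the values assume a unit circle at the origin, and a random-in-range that handles real-valued ranges):

(define xc 0)
(define yc 0)
(define r_squared 1)   ; unit circle, so dividing by r^2 changes nothing

(define (estimate-pi trials)
  (estimate-integral circleP -1.0 1.0 -1.0 1.0 trials))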

In testing, I added a few variations to see what effect it would have on the estimation. One thing that can change is the size of the circle. This doesn’t have a noticeable effect on how many trials it takes to get an accurate estimate. That makes sense, since all that happens is both the estimated value and the expected value are scaled by some amount (although if you took this too far, at some point you could lose precision using floating point numbers, so you are in theory better off with a unit circle).

The other variation was to change the size of the bounding box relative to the circle. This results in a typically less accurate estimate for the same number of trials, when compared to the tighter bounding box. It’s not too surprising to see this result either. The Monte Carlo test works better as the proportion of hits approaches the actual proportion of the areas. With a lower chance to hit (because of a smaller ‘target’ area), the ratio of hits is going to be computed with fewer successes, and the expected error will be greater.

Exercise 3.6.

Structurally, the random generator first needs to determine whether it is generating or in reset mode. It can be structured similarly to the bank account procedure of Section 3.1.1, with dispatch to locally defined functions depending on the argument. There is a crucial difference, however — rand is called with the command directly, at least for generating values.

In the bank account example, the internal state variable was defined within make-account, which returned a procedure that could be called with the commands; those actions were limited solely to that account (i.e. procedure). We have to come up with our own approach here. The ‘internal state variable’ referred to in the text needs to be defined separately if calls to rand are to work as specified.

One way would be to copy the bank account procedure, and create a pseudo-random number generator generator (and I think it’d be good to shorten the first part to PRNG, so we can call it a PRNG Generator). Then rand would just send calls to itself to the generated PRNG. That is what I did in my first solution, but to demonstrate an alternative way of doing things, I included another approach (I call these ‘Method 1’ and ‘Method 2’ in the solution file). In Method 2, I used an external (global) variable, which rand relies on. While that is simpler, the first way is arguably better, since whatever is keeping track of the random sequence is confined only to the rand function, and can’t be modified by anything else.

The definition of rand then resembles the definition of acc in the previous section — given a procedure make-prng that returns a procedure, rand is named using that procedure. Internally, calls to generate and reset are handled slightly differently, since one returns a value and the other returns a procedure that accepts the new value; this is taken care of within the dispatch portion.

As for what is actually stored in the internal variable, it is the last random value, which can be sent to a procedure like rand-update that will generate the next value. In my version, there is an internal rand-update, whose parameters are determined at the time make-prng is called. Those parameters are used by an LCG (Linear Congruential Generator), the method mentioned in the text’s footnote. While it’s not a great means for simulating randomness, it is fairly easy to implement and work with. Here’s a brief explanation of the formula, which also includes the requirements to make a ‘good’ sequence by choosing the right parameters.
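
A stripped-down sketch of Method 1, with the LCG folded in (the particular parameter values are just commonly cited ones, shown for illustration):

(define (make-prng seed a c m)
  (let ((state seed))
    (define (rand-update x)
      (modulo (+ (* a x) c) m))   ; the LCG step
    (define (generate)
      (set! state (rand-update state))
      state)
    (define (reset new-value)
      (set! state new-value))
    (define (dispatch msg)
      (cond ((eq? msg 'generate) (generate))   ; returns a value directly
            ((eq? msg 'reset) reset)           ; returns a procedure taking the new value
            (else (error "Unknown request" msg))))
    dispatch))

(define rand (make-prng 42 48271 0 2147483647))

; (rand 'generate)      ; the next pseudo-random value
; ((rand 'reset) 42)    ; restart the sequence from 42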

My second approach, as mentioned, uses an external variable. It also creates the PRNG in a different way. It turns out that Racket/Pretty Big actually has a PRNG Generator already, and it’s called make-pseudo-random-number-generator. Calls to random (the built-in function) can also specify which PRNG to use by adding it as an optional final argument. Other Scheme implementations may have a similar procedure with an alternate name (MIT Scheme, for example, has make-random-state).
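
For example, in Racket:

(define my-prng (make-pseudo-random-number-generator))
(random 100 my-prng)   ; an integer in [0, 100), drawn from my-prng rather than the default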

To reset the state of our PRNG, we can use the built-in random-seed function and give it the reset value. I initially took this approach. Calls to reset provide a new (integer) value for the seed. The actual implementation in Racket is slightly more complicated, since random-seed can only be called on the default PRNG (the one used by calls to random when no PRNG is specified). To use random-seed with a different PRNG, one would have to temporarily set it as the default, and then restore the original when done.

I eventually switched away from this reset method. What I did instead was to pass an entire PRNG in a known state as the reset value. My Method 2 makes use of another procedure that copies the state of a PRNG without changing it. We can’t simply use set! on the PRNG variable, since otherwise any later call to the PRNG we passed would affect our random function, and destroy the ‘known state’ it had when created. This does complicate how one chooses a new state in some situations, although the text implies more concern about repeating a sequence than about picking reset states. As long as the PRNG isn’t used elsewhere, it will remain in the state it was in.
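
One way to make such a copy in Racket is to round-trip the generator through its vector representation (a small sketch; the procedure name is just my own):

(define (prng-copy prng)
  (vector->pseudo-random-generator
   (pseudo-random-generator->vector prng)))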

In this case rand handles the dispatch directly, and it is a little shorter. However, there is the already-mentioned problem that the external variable has; it can be modified outside of the rand procedure, ruining the repeatability of the sequence. Obviously a hybrid approach can make use of both methods — keeping the PRNG contained only in rand, and making use of the language-supported means of generating the PRNG to reduce the work we need to do.

Solutions 3.1.2

SICP

SICP 3.1.2 Exercises

May 28, 2017 11:28

While this section is mainly about the benefits of assignment, once we start to test our procedures, one drawback is almost immediately apparent. The benefit we’re getting from assignment is that we can easily have functions where the output can vary. The problem for testing is that we’d rather have some expected value to check it against, which we can’t do if the output is unpredictable.

It might seem that one solution would be to see if the result is in some range, or allow for an approximate answer. This is not necessarily reliable, however, when dealing with random values. In many cases there is a small chance of the value being outside the expected range, even when working as intended. This might result in a test that only passes, say, 99.9% of the time. When it fails, however, we cannot know if that is due to some introduced bug, or simply because we hit the 0.1% chance that time.

Such systems are thus inherently impossible to perfectly test for correctness. This doesn’t mean there is no place for randomness in testing, or that we should abandon the attempt. In fact, the second exercise mentions using a reset for random-number generators (technically these are pseudorandom numbers) in order to get a repeatable list of values, which is quite useful in exercising a system that has the ability to take a broad range of inputs. The arithmetic system from last chapter would be one example. Using a resettable list allows us to repeat the test exactly, to see if a possible bug persists when the code is changed. Even though the testing approach is incomplete, it can sometimes find limitations of the system that might otherwise go undetected.

Since the book doesn’t fully explain how to implement rand (or rand-update specifically), I’ll recommend that if you are using Dr. Racket’s “Pretty Big” (as I do) or MIT Scheme, you can use the random procedure that is built-in. Racket’s random does not work as required for random-in-range, however, so I’ve included an alternate implementation in the Exercise file. Consult the documentation for random to learn how to set a particular state. For Exercise 3.6, you may need to learn more about how pseudorandom numbers are typically generated, though the footnote from the text discussing rand-update gives an idea.
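
As a rough idea of the kind of replacement I mean: Racket’s random insists on an exact positive integer argument, so one option is to scale the zero-argument form, which returns a real in [0, 1). This is only a sketch, not necessarily the exact code in the exercise file:

(define (random-in-range low high)
  (let ((range (- high low)))
    (+ low (* range (random)))))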

In the book, it is suggested that part of Exercise 3.5 is determining how to apply the integral test in order to check the value of pi. For this reason, the tests are incomplete in the exercise file. What I have done is write a procedure to make it easier to display the results of testing, which is listed in full below. What it does is take an estimator function, the actual value being tested, and the number of trials to input to the estimator. It will then report the calculated estimate (and optionally how close it comes to the actual value).

(define (monte-carlo-test trials estimator actual-value)
  (let ((result (estimator trials))
        )
    (display trials)
    (display " trials: ")
    (display result)
    (if showerror   ; showerror is a flag, defined elsewhere in the file, that turns on error reporting
        (begin
          (display " % Error: ")
          (display (* 100 (abs (/ (- actual-value result) actual-value))))
          )
        )
    (newline)
    )
  )

The procedure estimator should take one argument, the number of trials, and return an estimate. An example is given in the file that uses estimate-integral and a triangle-shaped region. This should work as a guide to write your own tests using the unit circle in order to get an approximation of π. Additionally, there are a few more tests added to explore what happens to the estimate when certain parameters are varied, such as defining a different circle for area testing or altering the bounding box for the Monte Carlo integration.

Exercises 3.1.2

SICP

SICP 3.1.1 Solutions

April 2, 2017 17:49

Ex. 3.1.

Our procedure needs to produce something that is itself a procedure, since the accumulator adds the value when it is executed. To keep track of the value, the variable sum is adjusted using set!. Because sum is the argument originally passed to make-accumulator, it only exists locally in the accumulator that is produced from that call. It will therefore be independent of any other accumulators generated by this procedure, as each call creates a different local sum variable.
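
That description translates almost directly into code; a sketch:

(define (make-accumulator sum)
  (lambda (amount)
    (set! sum (+ sum amount))
    sum))

(define A (make-accumulator 5))
; (A 10)   ; => 15
; (A 10)   ; => 25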

Ex. 3.2.

Here is the procedure in full:

(define (make-monitored proc)
  (let ((count 0))
    (define (how-many-calls?)
      count
      )
    
    (define (reset-count)
      (set! count 0)
      )
    
    (define (dispatch arg)
      (cond
        ((eq? arg 'how-many-calls?)  (how-many-calls?))
        ((eq? arg 'reset-count) (reset-count))
        (else
         (set! count (+ count 1))
         (proc arg)
         )
        )
      )
    dispatch
    )
  )

This is modeled on the bank account procedures from the text. First, a local storage variable for the count is created (using let). Then there are sub-procedures for each operation. Finally, there is the dispatch procedure, which is what is actually returned.

Using sub-procedures allows dispatch to be simpler in form, with just one line for each special action performed. When none of the special messages match, the default case occurs. That means the count is incremented and the original procedure is called. About the only difference between this dispatch procedure and the one in make-account from the text is that here the else-case performs a delegation to the monitored procedure, and is the ‘normal path’. In make-account the else is a catch-all for unrecognized input.

While this monitoring function definitely provides us a powerful method for tracking how procedures are used, it is a bit simplistic in this form. The most obvious drawback is that the monitored procedures must take one (and only one) argument. One way to deal with this limitation is demonstrated in the last exercise, when we put make-monitored to use for testing the solution.

Ex. 3.3.

My intent in adding the password feature to the account was to make the fewest possible changes to the original make-account. To this end, only the new required password argument is added, and it is handled in an if statement, separate from the cond of the original procedure. This is probably an influence from my work in other programming languages, as it would be functionally equivalent to include this case as the first line of the cond statement. Either way, the password is the first thing checked, and nothing else should be processed if it is not the one provided when the account was created.

A potentially tricky aspect of this procedure is deciding how the ‘complaint’ should be reported. The text suggests that a message is returned. However, in order for the procedure to function properly, we cannot just return a message straight from dispatch. The way the account is used is that each interaction should resolve to a procedure that operates on a numeric value. Even though in this case that value should be ignored, it’s still necessary to return a procedure. My returned procedure is a lambda expression that takes one argument but just spits out the complaint instead of doing anything with the argument.
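
Putting those pieces together, a sketch of the modified make-account (simplified; the complaint text and error handling are illustrative):

(define (make-account balance password)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (define (dispatch pw m)
    (if (eq? pw password)                          ; password checked before anything else
        (cond ((eq? m 'withdraw) withdraw)
              ((eq? m 'deposit) deposit)
              (else (error "Unknown request" m)))
        (lambda (amount) "Incorrect password")))   ; ignores its argument, reports the complaint
  dispatch)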

Ex. 3.4.

Since the variable that counts the number of attempts must be internal to each account, we should set it up using a let statement, as was done in make-monitored. To keep this variable separated from the dispatched procedures, the let that defines it wraps only around the dispatch procedure itself. The password lockout exists in the ‘gateway’ portion of the verification process, and should only be defined and accessed at that point.

There are two mechanisms that must act on the attempt variable. Firstly, each failed attempt must increment it, so that when the limit is reached the call-the-cops procedure is invoked. Secondly, there needs to be a way to reset it, so that only consecutive failures are flagged. This demonstrates the value of local state variables — having the account proceed through multiple, nearly identical states until reaching failure would in general be unwieldy as well as unintuitive.

One question that is not handled in the text is what exactly should happen to the account once call-the-cops has been called. Most modern systems with a similar limitation on login attempts go into a lockout state and often require additional action (or a timeout period) to unlock the account. The text doesn’t really require it, but I implemented a lockout by using a flag to show that call-the-cops had already been invoked. Another approach might be just a check to see if the number of attempts has exceeded the limit, but I find using a second variable to convey the actual state of the account to be clearer.
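
A sketch of how those mechanisms fit together (the attempt limit, the flag name, and the stand-in call-the-cops are all illustrative):

(define (call-the-cops) "Calling the cops!")   ; stand-in so the sketch runs

(define (make-account balance password)
  (define (withdraw amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds"))
  (define (deposit amount)
    (set! balance (+ balance amount))
    balance)
  (let ((attempts 0)      ; consecutive failed attempts
        (locked #f))      ; set once call-the-cops has been invoked
    (define (dispatch pw m)
      (cond (locked
             (lambda (amount) "Account locked"))
            ((eq? pw password)
             (set! attempts 0)                     ; reset on any successful attempt
             (cond ((eq? m 'withdraw) withdraw)
                   ((eq? m 'deposit) deposit)
                   (else (error "Unknown request" m))))
            (else
             (set! attempts (+ attempts 1))
             (if (> attempts 7)
                 (begin (set! locked #t)
                        (lambda (amount) (call-the-cops)))
                 (lambda (amount) "Incorrect password")))))
    dispatch))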

To test it, we can use the make-monitored function, since it can monitor whether the alarm procedure was called. In order to use it, however, we need to jump through some hoops. As previously mentioned, make-monitored only works with procedures that take one argument. To get around that, we do this:

(define old-ctc call-the-cops)

(define monitored-ctc (make-monitored 
                       (lambda (x) 
                         (old-ctc)
                         )
                       )
  )

(define (call-the-cops)
  (monitored-ctc null)
  )

To make the monitored version, we use a lambda to create a function of a single argument. But since we don’t ever need the argument, it ignores it and does nothing but run (the original) call-the-cops instead. When we use it, we now need to pass an argument to be monitored, which is simply going to be ignored. Note that we check the counts for testing by calling monitored-ctc with the special monitoring messages, not our new call-the-cops. This slight logical hole is fine as long as the new version is the only function that ever calls it.

The reason we had to redefine the procedure to call our monitored version is that the account will only call the procedure named call-the-cops. On the one hand, this makes it easy to test our code without having to change the account procedure. On the other hand, it demonstrates one problem that arises when procedures can be arbitrarily renamed — simply replacing call-the-cops with something else could render the account security procedure impotent. (This isn’t so much a worry about malicious hacking as it is about the ability of programmers to unintentionally wreck their own or others’ code).

Solutions 3.1.1