This section, on orders of growth, takes up barely a page in the book, and it is the barest of introductions to a complex and much-studied subject in computer science. The two exercises here are among the trickiest in the whole book, even though they involve essentially no programming, and the solutions are actually quite involved. Still, it’s a useful topic to have at least some idea about.
Since the next topic will probably be very lengthy, I’ll take this chance to go into the notation used for talking about orders of growth, something the book doesn’t cover.
While the book only mentions one, there are several ways of talking about orders of growth. All of them compare one function to another. For two functions f and g, we can say f = O(g), f = Ω(g), or f = Θ(g). It’s worth noting that these are not mutually exclusive. Case is important too: the lowercase ω(g) and o(g) mean something else, which I won’t get into here.
A formal statement of f being O(g) (known as ‘Big-O’) would be:
Let f(x) and g(x) be functions with a positive domain and range. f is O(g) if there exists a positive real constant c such that f(x) <= c * g(x) for all sufficiently large x.
In other words, if f = O(g), f does not ‘grow’ any faster than g. If I say something is O(n²), then n² could be scaled (by a single constant over the whole range) so that it always stays at or above the function, no matter how big n gets.
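To make that concrete (an example of my own, not from the book): f(n) = 3n² + 10n is O(n²), because the single scaling constant c = 4 gives f(n) <= 4n² for every n >= 10. A quick sketch to check the inequality:

```python
# Sketch: f(n) = 3n^2 + 10n is O(n^2), witnessed by c = 4.
def f(n):
    return 3 * n**2 + 10 * n

def bound(n, c=4):
    return c * n**2

# f(n) <= c * n^2 holds for every n >= 10 (since 10n <= n^2 once n >= 10).
for n in range(10, 1000):
    assert f(n) <= bound(n)
```

Note that the inequality fails for small n (f(5) = 125 > 100 = bound(5)); that’s fine, since the definition only asks for it to hold eventually.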
The analogy is to <=, up to a constant scaling factor: f = O(g) says, roughly, f <= g. This is how the other two can be described. If O is like <=, Ω is just going the other way, like >=. Θ is like =; the usual definition of Θ is that f is Θ(g) if it is both O(g) and Ω(g). Thus when SICP uses Θ they are talking about the ‘actual’ order of growth, although they normally only ask for estimates of it.
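As an illustration of Θ (again an example of my own): f(n) = 2n² + n is Θ(n²), since it can be sandwiched between two scaled copies of n², with c1 = 2 below and c2 = 3 above, for all n >= 1:

```python
# Sketch: 2n^2 + n is Theta(n^2), because it is bounded on both sides:
#   2 * n^2  <=  2n^2 + n  <=  3 * n^2   for every n >= 1.
def f(n):
    return 2 * n**2 + n

for n in range(1, 1000):
    assert 2 * n**2 <= f(n)   # the Omega(n^2) direction
    assert f(n) <= 3 * n**2   # the O(n^2) direction
```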
The difficulty with Θ is that it may be hard to show. Big-O is the most used in computer science, since any discussion is usually comparing different algorithms, and knowing that one is never worse than another is often the most useful thing. Ω gives some information (“you can’t do better than this”), though its main application is in proving Θ. The result is that for the purposes of estimation in these exercises, I’ll talk about something being O(g), even when Θ(g) might be true.
The final practical point is that these comparisons are asymptotic: Big-O considers only the ‘worst-case’ behavior in the limit, as the input grows without bound. The difference might only show up at impractically large inputs, so these calculations do not necessarily tell you which algorithm is the best fit in practice. But they are useful in understanding how some problems might scale.
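A made-up illustration of that last point: suppose one algorithm takes 1000n steps (it is O(n)) and another takes n² steps (O(n²)). Asymptotically the linear one wins, but the constant factor means the quadratic one actually does less work until n passes 1000:

```python
# Hypothetical step counts: a linear algorithm with a large constant
# factor versus a quadratic one with a small constant factor.
def linear_steps(n):
    return 1000 * n   # O(n), but with a big constant

def quadratic_steps(n):
    return n * n      # O(n^2)

# For small inputs the 'asymptotically worse' algorithm is cheaper...
assert quadratic_steps(100) < linear_steps(100)
# ...and only past the crossover at n = 1000 does the linear one win.
assert quadratic_steps(10_000) > linear_steps(10_000)
```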