(This is a snapshot of my old weblog. New posts and selected republished essays can be found at raganwald.com.)

Friday, September 15, 2006
  Scientists Announce Empirical Evidence for Greenspun's Tenth Rule

It's Friday, so let's have some fun.

Sinclair Schuller posted an interesting article, How to Rewrite Standard Recursion through a State Stack & Iteration (the original page has been removed). In his own words:
Recursion, as a construct, is quite beautiful. It offers an elegant means of achieving an algorithmic goal and is used in everything from mathematics to text processing and data structure manipulation. The problem is, using it in practice through today's popular languages (such as my favorite, C#) can prove to be a disaster.
The problem, quite simply, is that many flavours of Blub (that is, popular languages) have a limitation on how deeply you can make function calls before returning. That limit is far smaller than the limitations placed on the size of data set you can manipulate.

Door to Nowhere
Door to Nowhere, originally uploaded by x180. Courtesy James Duncan Davidson

Recursive functions often have the property that they require O(n) storage. Therefore, if the number of function calls you can make before returning is fixed, the size of data set you can manipulate is fixed.

Yes, I said data set. That's very interesting, especially when you consider that Sinclair's example, the factorial function, doesn't really manipulate what most people would think of as a data set. But it really is, because if you were to think of 100! as a macro instead of a function, you would end up with 1 x 2 x 3 x 4 x 5... x 99 x 100. So there really is a set of 100 datums to manipulate.
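To make that concrete, here is a quick Ruby sketch of 100! viewed as a fold over a hundred-element data set (`inject` plays the role of the expanded macro; the variable names are mine):

```ruby
# 100! treated as a data set of one hundred factors, reduced by multiplication.
factors = (1..100).to_a          # the "data set": [1, 2, 3, ..., 100]
product = factors.inject(:*)     # 1 * 2 * 3 * ... * 99 * 100
```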

Now back to Sinclair's example. He shows how to emulate a programming language's call stack. This is Greenspunning at its finest:
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Sinclair's transformation implements a better programming language (like Lisp) that does not have an arbitrary limit on the size of the stack it uses for function calls. This used to be a neat programming challenge. Old-timers will remember when certain languages (BASIC, FORTRAN) did not support re-entrant functions or procedures. So if you wanted to do something recursively, you had to Greenspun. (re-)Discovering this technique in order to write a "Towers of Hanoi" program was an early "Aha!" moment for me.

I call such a transformation a pseudo-iteration. It isn't really iterative, as the presence of a stack shows. Instead, it is emulating or interpreting a recursive algorithm. Sinclair also shows how to optimize the algorithm so that it always executes in constant space, O(1).
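Sinclair's original C# has been lost to link rot, but the explicit-stack transformation he describes can be sketched in Ruby like this (the function name is my own):

```ruby
# Pseudo-iteration: emulate the language's call stack with an explicit one.
def factorial_with_stack(n)
  stack = []
  # The "calling" phase: instead of recursing, push each pending frame.
  while n > 1
    stack.push(n)
    n -= 1
  end
  # The "returning" phase: pop frames and perform the deferred multiplications.
  result = 1
  result *= stack.pop until stack.empty?
  result
end
```

Note that the stack still grows to n - 1 elements. That is exactly why this is emulation rather than true iteration: the recursion's storage requirement has moved from the call stack to the heap, not disappeared.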

I consider this a variation on Greenspunning where instead of implementing half of Common Lisp, we implement half of Scheme. Scheme is a member of the Lisp family that can optimize tail-recursive functions. (So can most Common Lisp implementations. Maybe. As Wikipedia says: The Scheme standards documents require tail-call optimization, which the CL standard does not. Most CL implementations do offer tail-call optimization, although often only when the programmer uses an optimization directive.)

Let's look at what's going on with some Ruby. The first thing we need is a factorial function:
def factorial n # whole numbers only
  (([0, 1].include? n) && 1) || (n * (factorial (n - 1)))
end
(If you aren’t familiar with Ruby, what you need to know is that the && operator returns its last argument if all of its arguments are truthy (neither nil nor false) and the || operator returns its first truthy argument.)

Okay, so the first thing we do is convert this factorial function into tail recursive form:
def factorial n, acc = 1
  ((0 == n) && acc) || (factorial((n - 1), (n * acc)))
end
As you already know (or have learned by following the links to an explanation of tail call optimization), this version of factorial has its call in tail position. That means that every time it calls itself (just once), it returns the result of the call directly. The first version of the function didn't return the result directly; it called itself and then performed a multiply on the result.

Because it does no further work with the result of the recursive call, there is no need to store the calling parameters or the result on a stack. What you do is this. You set up a loop. Inside each iteration you perturb the parameters you have received. When the loop is done, you return some expression on the final set of parameters.

As it happens, factorial is easy in this respect because the final return is simply the accumulator we had been carrying around as a parameter in the tail recursive algorithm.

Let's perform the conversion, using this approach to omit the stack:
def factorial n
  acc = 1
  until 0 == n
    acc *= n
    n -= 1
  end
  acc
end
We have progressed beyond half of Common Lisp into half of Scheme. Now, like a properly optimizing environment, we have greater performance and we require constant memory.
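To see the difference in practice, compare the two versions on a large input. This sketch assumes Ruby's default VM stack, which typically overflows somewhere in the low tens of thousands of nested calls (the function names here are mine):

```ruby
# The naive recursive version: each call consumes a stack frame.
def factorial_recursive(n)
  n.zero? ? 1 : n * factorial_recursive(n - 1)
end

# The hand-converted loop: constant stack space regardless of n.
def factorial_loop(n)
  acc = 1
  until n.zero?
    acc *= n
    n -= 1
  end
  acc
end

factorial_loop(20_000)          # completes in constant stack space
# factorial_recursive(20_000)   # typically raises SystemStackError
```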

Only we have done by hand the work that a machine (you know, a labour-saving device) could have done for us, had we used a programming language of 1970s vintage.

How quaint.

Update: C# and Java Leak

Why is this kind of manual transformation necessary? In the end, it is because the so-called modern programming languages are so obsessed with copying popular languages that came before them (like C and C++) that they copied the bugs as well as the features.

They present an abstraction, function calling, but the abstraction leaks badly: you can't actually use it for non-trivial data sets. To me, this is kith and kin with the leaky abstraction known as primitive data types.
The only way to deal with the leaks competently is to learn about how the abstractions work and what they are abstracting. So the abstractions save us time working, but they don't save us time learning.

And all this means that paradoxically, even as we have higher and higher level programming tools with better and better abstractions, becoming a proficient programmer is getting harder and harder.
Joel Spolsky

I am deeply disappointed that programmers need to do this kind of work by hand in this day and age. We have known how to release function calls from the tyranny of a fixed stack for decades. We have known how to optimize tail calls for decades.

No matter how simple and obvious these transformations are, every time we do this by hand we are sidetracked from adding real value through innovation. Every time we do this by hand, we add some risk of defect to our software. Every time we do this by hand, we obfuscate the core expression of our intent behind the leaky abstraction.

Comments on “Scientists Announce Empirical Evidence for Greenspun's Tenth Rule”:
Guy Steele's call/cc eval doesn't use the stack at all.
Here's some lively commentary on Sinclair's post. There's some excellent theory in there, as well as some enormously valuable insights into why these conversions are non-trivial.
Just as a side note, I've made some changes to the article to provide a more appropriate context. Its title is now "How to Rewrite Standard Recursion through a State Stack & Iteration".
Did I miss something? This is first year CS stuff. Except we had to write the stack data structure as well using linked lists which were written by us in a previous lesson.

I remember doing a tail-optimized recursive loop when I wrote a scheduler in VB6 that emulated the Windows task scheduler. I don't remember what the algorithm was, but I tried to avoid using a recursive loop as much as possible. I made sure that the depth of the calculation was really small in the worst case and I had more "exit" conditions than recursive conditions. The calculation was the date and time of the next trigger of the scheduler. I calculated it up-front and then just did a less than or equal test for every Timer.Tick event. I called it the "better late than never" algorithm.

My point is that a one-size-fits-all recursive solution is unnecessary and inefficient. Algorithms can be optimized to fit the need of the calculation and the restrictions of the language.

I miss the days when I had to show why QuickSort was O(n log n). I also miss the days when I had to know the proper definition of O(f(x)).
As far as I know GCC does tail call optimisation. So
int factorial(int n) {
    return (n == 0) ? 1 : n * factorial(n - 1);
}
should be optimised to the iterative version in C, C++, and Java.

I unfortunately don't have time to verify my bold claim, so don't bet the house on it before checking it yourself. :)
GCC does tail call optimisation

I have also heard this, and that is certainly how things ought to work!
This discussion of tail call optimization in .NET suggests that tail calls were badly broken at the time the author investigated them, but there was some evidence that tail call optimization was on the .NET to-do list.
And this article explains that Java is in no danger of implementing fast tail call optimization.
Seems like the article can now be found at:


Reg Braithwaite
