Yeomanry and Exploding Brains
Note: I wrote this post very quickly last evening, in an almost stream-of-consciousness style. I had just endured some particularly frustrating client issues, and I really, really wanted to focus on a few bright moments that I had enjoyed that day. You know, trying to be a good dog.
After publishing it, I felt unsatisfied with the post. It didn’t feel like I was saying or thinking anything particularly new, just the same old, same old thing about stretching the mind and looking for the elusive “quality without a name” that lies behind well-written software. So I yanked it and went home. I was talking with a friend who writes for a living, and she quoted a great aphorism: “All writing is rewriting.” She suggested I rework it, not discard it. So here it is again, and I hope to rework it until I’m satisfied. The first major change will probably be to cut it in half.
We’ll see about the rest.
I finished reading The Little MLer this morning. This is a wonderful book. It’s obviously valuable if you want to learn something about SML or Ocaml. But having read it, I’d argue that it’s a great introduction to strongly typed functional languages, period.
In fact, it’s probably the very best first book to read before learning Haskell if you haven’t already used a language that combines first-class functions and strong, static typing. If you read this first, you can concentrate on lazy evaluation and monads without having to grok currying and other first-class functional ideas at the same time.
Stars, Arrows, and Elegance

Two things jumped out at me this morning. The first was chapter eight, “Bows and Arrows,” an introduction to currying and partial function application. When I say it that way, it sounds rather dry. But it rolled along quite pleasantly. The material will be familiar to anyone with even a passing acquaintance with the idea of a function returning a function. It has the same ideas you’ll find in Structure and Interpretation of Computer Programs, but delivered with static typing.
This alone is worth the price of the book. I came to the subject with an impoverished experience set: I had used latently typed languages like Scheme and Ruby, as well as statically and declaratively typed languages like C++ and Java. Although I knew “intellectually” how languages like Ocaml and Haskell deliver some of the power of Scheme along with the benefits of compile-time type checking, there is no substitute for walking through the examples to spark the “Aha!” feeling. Incredibly, chapter eight got better. It finished with a real bang:
The examples in the chapter built various functions that took multiple arguments, then constructed their equivalents by combining smaller functions with partial application. It closed with The Eighth Moral:

Replace stars by arrows to reduce the number of values consumed and to increase the generality of the function defined.
In other words, first-class functions let you break monolithic functions that take a lot of arguments into smaller components that can be combined and recombined. This is a deep idea in writing programs. We often see the tension between programs as collections of small pieces that fit together and black boxes that are configured by means of switches and dials on the outside.
This is a good tension: we call the black box approach “abstraction” and we call the combinatory approach “bottom-up programming.” Both produce programs that are expressed economically, but the black box does so by sweeping the complexity “under the carpet” and bottom-up programming does so by identifying and combining the simple constituent elements of a program in an economical way.
I think of black box abstractions like inventing new nouns for new ideas. A web server? We’ll call that a “Fizbuhlsweepnik.” A REST server? We’ll call that a “Harkonnensmelt.” Whereas I think of bottom-up programming as inventing smaller nouns, verbs, adjectives, and adverbs, so we can say a “web server” or a “REST server” or even a “Caching REST server.” Being able to combine the small words gives us more to say with much less effort. (I am not a linguist, so this argument is suspect!)
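To make the Eighth Moral concrete, here is a small sketch in OCaml (the book’s examples are in SML, but the types read the same way): a function on a pair has a star in its type, and replacing the star with an arrow buys partial application for free.

```ocaml
(* The "star" version consumes a pair: int * int -> int *)
let add_pair (x, y) = x + y

(* The "arrow" version is curried: int -> int -> int *)
let add x y = x + y

(* Replacing the star by an arrow increases generality:
   new functions fall out of partial application. *)
let increment = add 1
let add_ten = add 10

let () =
  assert (add_pair (2, 3) = add 2 3);
  assert (increment 41 = 42);
  assert (add_ten 32 = 42)
```

The two versions behave identically on any pair of integers, but only the arrow version can be specialized into `increment` and `add_ten` without writing a wrapper.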
Giles Bowkett nailed this idea when he wondered whether languages like Scheme are actually easier to learn than languages like Java.[1] This makes a lot of sense from the perspective of simplicity. Pure Scheme is based on just five axiomatic “special forms,” plus macros to build almost everything else out of those five axioms. That’s a lot easier to learn than a language that tries to provide the same power with a plethora of knobs and switches like package, static, and main.
This tells us something about ecosystems versus languages. Lately, quite reasonable people have wondered aloud whether ecosystems of frameworks and libraries are more important than the languages themselves. As we’re beginning to see with multiple languages running on common VMs, the point may become moot very shortly. But for now, it is easy to see that libraries tend to look like black boxes. When they do exactly what you need, they save time and trouble. But they tend to be wide and shallow, just like Java. They have a lot of knobs and switches.
Good languages, OTOH, give you power that can be combined and recombined like first-class functions. They let you tame the black boxes by wrapping up the parts you need and combining them in simple ways. With currying, a function of twenty arguments can be transformed into a chain of functions of one argument each. With powerful languages, a baroque library can be deftly specialized to fit your program.
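As a sketch of that last point (the knobby `fetch` function below is invented for illustration, not a real library): partial application lets you freeze a wide interface’s many settings once, leaving a narrow function shaped exactly like your program needs.

```ocaml
(* A hypothetical "black box" library function with many knobs. *)
let fetch ~host ~port ~timeout ~retries path =
  Printf.sprintf "GET http://%s:%d%s (timeout=%d, retries=%d)"
    host port path timeout retries

(* Partial application tames the black box: fix every knob once,
   and what remains is a simple one-argument function. *)
let fetch_api = fetch ~host:"example.com" ~port:8080 ~timeout:30 ~retries:3

let () = print_endline (fetch_api "/users")
(* GET http://example.com:8080/users (timeout=30, retries=3) *)
```

The rest of the program never sees the twenty knobs again; it just calls `fetch_api` with a path.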
My Brain Explodes

In the same chapter, the authors talked about extensional equivalence.[2] This is the idea that two functions can have exactly the same behaviour, even though they are defined in different ways. As the authors point out, proving that two functions are extensionally equivalent is impossible in the general case.
That shouldn’t stop anyone with a taste for reading esoteric programming weblogs.[3] Something was tickling the back of my mind about this problem. Yes, it had to do with rewinding into the past to correct mistakes, something like backtracking in a reasoning system.
So perhaps we want an equivalence function that takes two functions “p” and “q” and returns a Boolean. Both functions must have the same signature, obviously. How can we write such a function? Here’s one approach: scan the history of the program for calls to p. For each call, call q with the same arguments and compare the results. If they differ, return false. Do the same for each call to q, comparing its result to the result of calling p with the same arguments. If there are no differences, return true.

But wait, maybe your sample size is insufficient. No problem: continue forward in the program, and every time p or q is called in the future, compare the results from both functions again. If they are the same, continue on. But if they differ, rewind back to when the equivalence function was called and return false.
That does not guarantee equivalence, of course: the functions may differ for some set of inputs that you never tested in your program. But if they are equivalent for the set of inputs you actually run, what do you care if they differ for inputs outside the “event horizon” of your program?
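Minus the time travel, the backward-looking half of this predicate is easy to sketch. The example below assumes the program’s call history is captured by wrapping calls in an `observe` function; that recording mechanism is my invention for illustration, not anything from the book.

```ocaml
(* Recorded arguments; a real program would capture these by
   instrumenting every call site. *)
let history : int list ref = ref []

let observe f x =
  history := x :: !history;
  f x

(* The history-scanning half of the "extensional equivalence
   predicate": do p and q agree on every input actually seen?
   No rewinding, and no guarantee outside the program's
   "event horizon" of inputs. *)
let equivalent_so_far p q =
  List.for_all (fun x -> p x = q x) !history

let double x = x * 2
let double' x = x + x    (* different definition, same behaviour *)

let () =
  ignore (observe double 3);
  ignore (observe double 10);
  assert (equivalent_so_far double double');
  (* A function that disagrees on a recorded input is caught. *)
  assert (not (equivalent_so_far double (fun x -> x * x)))
```

The forward-looking, rewinding half would need continuations or checkpointing, which is exactly why it stays a thought experiment here.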
Now quite honestly, I cannot think of a practical use for this “rewinding extensional equivalence predicate.” I mention it only because the thought captures for me one of the benefits of reading certain kinds of books: they provoke you into thinking about new things in new ways. The ideas may be useful immediately, or they may pop up down the road. Or maybe it was just a passing thought that amounted to nothing of grave importance.
But overall, my experience is that people and writing that push you into unfamiliar thinking territory are good for you. And in that spirit, I recommend The Little MLer[4] with as much enthusiasm as I recommend writing your own version of the Y Combinator.
[1] See also: Giles Bowkett turns language elitism upside down.
[2] Equivalence has been on my mind recently!
[3] And you wonder why I have been doing this since the seventies but somehow managed to miss out on earning a Turing Award while utterly failing to amass a small private fortune?
[4] Judging by the click-throughs on my blog in the last year, there are plenty of people who can’t quite bring themselves to pick up a book about ML. Don’t fret: the authors have also written The Seasoned Schemer for your first-class functional pleasure. In an effort to expand my own mind, I will be starting their book about logic programming, The Reasoned Schemer, tomorrow.