Would you like a side of referential transparency with your order of static typing?
Recapitulating some of the arguments for and against static typing has been very refreshing. And thanks to everyone who took the time to share their point of view.
Leaving aside the argument that static typing helps your IDE help you, the really big idea behind modern static typing is that because certain properties of variables are invariant, it is tractable to perform a lot of analysis on a program looking for contradictions. For example, we say that foo is an Integer, and then a little later on we call foo.append. Since Integers don’t implement a method for appending, we know that there is an error in the program without having to run the program.
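Here is a minimal sketch of that contradiction in Java (the class name is mine, and foo is just the variable from the example above):

    public class Contradiction {
        public static void main(String[] args) {
            Integer foo = 42;   // we declare, once and for all, that foo is an Integer

            // Integer declares no append method, so the next line is a contradiction.
            // Uncommenting it produces "cannot find symbol: method append" at compile time:
            // foo.append("bar");
        }
    }

The checker rejects the program before a single line of it runs.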
Thoſe who would give up Essential Liberty to purchaſe a little Temporary Safety, deſerve neither Liberty nor Safety.
And do you know what? Although I accept that this is true, and even useful, I haven’t personally been swayed by it (I’m just going to give my experience here, not a prescription or advice to others). The problem, as I see it, is that the statically typed languages I’ve used for production work have had such primitive typing systems that I couldn’t use them to solve really important problems. Errors where I mistakenly try to call append on a String when I should be calling it on a StringBuffer just don’t make up for all the extra verbiage and the onerous restrictions on meta-programming imposed by popular languages.
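To be concrete, that mistake and its fix look something like this in Java (a sketch, not code from any real project):

    public class AppendMistake {
        public static void main(String[] args) {
            String greeting = "Hello";
            // greeting.append(", world");   // compile error: String has no append method

            StringBuffer buffer = new StringBuffer("Hello");
            buffer.append(", world");        // fine: StringBuffer does declare append
            System.out.println(buffer);     // prints "Hello, world"
        }
    }

Useful, yes. But it is a shallow class of error to catch in exchange for so much ceremony.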
But I had a sudden “Oho, you’re busted!” moment a few days ago. Didn’t I write a nice post explaining why mutable local variables are bad? The gist of my argument was… wait for it… mutable things make it hard to move stuff around, because you don’t have those nice invariants to reason about. Hmmm. Could there be a strong parallel between getting rid of mutable variables and static typing?
Static typing and stateless programming
Yes, of course there is. It’s about minimizing state changes. Static typing is about having just one type for each variable. Programming with immutable variables is about having just one value for each variable. I advocate the latter. Why haven’t I embraced the former?
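To make the parallel concrete, here is a sketch in Java, with final standing in for immutability:

    public class OneTypeOneValue {
        public static void main(String[] args) {
            // Static typing: one type per variable, checked before the program runs.
            final Integer count = 10;

            // Immutability: one value per variable. Reassignment is also a compile error:
            // count = 11;   // "cannot assign a value to final variable count"

            // Either way, count carries an invariant you can rely on anywhere in its scope.
            System.out.println(count);
        }
    }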
Well… I could argue that dynamic meta-programming is worth more to me than the benefits of static typing. It really is worth a lot more than the benefits of the simplistic typing systems you find in popular languages. But is it worth more than the powerful systems in languages like Haskell or ML? Maybe not.
And how dynamic is my meta-programming? I love the fact that I can use constructs like acts_as_versioned in Ruby, but there are languages that allow static meta-programming (like Scheme’s macros) that would go just as far for much of what I do. Much farther than the restrictive straitjacket of popular languages, anyways.
Paradigm smells
This brings me to writing DSLs in Ruby. One of the reasons DSLs are incredibly useful is that they are declarative: the what is cleanly separated from the how. Lots of successful DSLs are “business rules”: they aren’t statements to be executed, they’re constraints on the behaviour of a system. Just like static types are constraints.
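Here is a rough sketch of what I mean, in Java rather than Ruby for continuity with the examples above (the Order class and the rules themselves are hypothetical):

    import java.util.List;
    import java.util.function.Predicate;

    public class BusinessRules {
        static class Order {
            final double total;
            final boolean paid;
            Order(double total, boolean paid) { this.total = total; this.paid = paid; }
        }

        public static void main(String[] args) {
            // Each rule states what must hold; nothing here says how or when to enforce it.
            List<Predicate<Order>> rules = List.of(
                order -> order.total >= 0.0,                // totals are never negative
                order -> !order.paid || order.total > 0.0   // you cannot pay for nothing
            );

            Order order = new Order(19.95, true);
            System.out.println(rules.stream().allMatch(rule -> rule.test(order)));  // true
        }
    }

The rules constrain the system’s behaviour without prescribing its control flow, which is exactly what a type annotation does.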
Does this sound familiar? I’m confessing that one of the reasons I like Ruby is that it’s easy to write things that are static, that don’t change state. But Ruby is all about having flexible things that change at runtime. This is what you might call a paradigm smell: the paradigm of the language, where types change on the fly, is at odds with the kind of programs I try to write in Ruby.
Isn’t that interesting?
p.s. Okay, all of you static typing fans who are rolling up your sleeves to write an “I told you so” comment: before you hit “publish,” ...
Are you curious about what would happen if you turned the static typing knob up to eleven? If you took the Red Pill? Could you use a really powerfully typed language to detect XSS vulnerabilities at compile time? Could you switch from an imperative, loop-driven programming style to a functional style and get rid of mutable variables? Could you express your domain logic declaratively in a DSL instead of in procedures and methods? If a little compile-time analysis is good, how much better could a lot of compile-time analysis be?
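For instance, here is the same sum twice in Java (a sketch): once with a mutable accumulator, once as a fold with no mutation at all:

    import java.util.List;

    public class SumTwoWays {
        public static void main(String[] args) {
            List<Integer> prices = List.of(3, 1, 4, 1, 5);

            // Imperative: total mutates on every pass through the loop.
            int total = 0;
            for (int price : prices) {
                total += price;
            }

            // Functional: a fold. No variable changes after it is bound.
            int sum = prices.stream().reduce(0, Integer::sum);

            System.out.println(total == sum);   // prints true
        }
    }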