raganwald
(This is a snapshot of my old weblog. New posts and selected republished essays can be found at raganwald.com.)

Monday, January 07, 2008
  A programming language cannot be better without being unintuitive


Optimising your notation to not confuse people in the first 10 minutes of seeing it but to hinder readability ever after is a really bad mistake.
—David MacIver, via In Defence of (0/:l)(_+_) in Scala

Or as Jef Raskin put it:

The impression that the phrase “this interface feature is intuitive” leaves is that the interface works the way the user does, that normal human “intuition” suffices to use it, that neither training nor rational thought is necessary, and that it will feel “natural.” We are said to “intuit” a concept when we seem to suddenly understand it without any apparent effort or previous exposure to the idea.

It is clear that a user interface feature is “intuitive” insofar as it resembles or is identical to something the user has already learned. In short, “intuitive” in this context is an almost exact synonym of “familiar.”

The term “intuitive” is associated with approval when applied to an interface, but this association and the magazines’ rating systems raise the issue of the tension between improvement and familiarity. As an interface designer I am often asked to design a “better” interface to some product. Usually one can be designed such that, in terms of learning time, eventual speed of operation (productivity), decreased error rates, and ease of implementation it is superior to competing or the client’s own products. Even where my proposals are seen as significant improvements, they are often rejected nonetheless on the grounds that they are not intuitive.

It is a classic “catch 22.” The client wants something that is significantly superior to the competition. But if superior, it cannot be the same, so it must be different (typically the greater the improvement, the greater the difference). Therefore it cannot be intuitive, that is, familiar. What the client usually wants is an interface with at most marginal differences that, somehow, makes a major improvement. This can be achieved only on the rare occasions where the original interface has some major flaw that is remedied by a minor fix.
—Jef Raskin, Intuitive Equals Familiar


Complaining that a programming language “Violates the Principle of Least Surprise” is a complaint that it fails to appeal at first glance, without any training or investigation. But if a new language is “intuitive,” it can only offer you a marginal improvement over the languages you already know well.

You needn’t look at functional languages like Scala to see this at work. Is Ruby’s for loop an improvement over Java’s? By how much? Ruby’s big win over Java in that regard is the ease with which you can use Enumerable’s collect, select, detect, and inject methods. Which, of course, are not familiar to the programmer with a grounding in for loops. They require study to understand. But once understood, they make code easier to read thereafter.
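To make that concrete, here is a small sketch (the array and the blocks are my own illustration, not from the post) contrasting the familiar for loop with the Enumerable vocabulary:

```ruby
numbers = [1, 2, 3, 4, 5]

# The familiar way: a for loop accumulating the squares of the even numbers.
squares = []
for n in numbers
  squares << n * n if n.even?
end

# The Enumerable way: unfamiliar at first, but each method names an intention.
squares   = numbers.select(&:even?).collect { |n| n * n }  # => [4, 16]
sum       = numbers.inject(0) { |acc, n| acc + n }         # => 15
first_odd = numbers.detect(&:odd?)                         # => 1
```

Once select, collect, detect, and inject are learned, the second version says what it does without any loop bookkeeping.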



post scriptum:

As Justin points out in the comments, the real meaning of the “Principle of Least Surprise” is that languages should be consistent. The principle applies to how the language behaves when you already know something about it. So if you see (_+_), the question is whether this makes sense if you already know that underscore is a function parameter.
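For readers who do not know Scala: in (0/:l)(_+_), the /: operator folds the list l from the left, seeded with 0, and (_+_) is an anonymous function of two parameters. A Ruby analogue, using a hypothetical list of my own choosing, is inject with an explicit seed:

```ruby
list = [1, 2, 3, 4]

# Scala's (0 /: list)(_ + _) reads as: fold list from the left,
# starting from 0, combining elements with +. Ruby spells that inject:
sum = list.inject(0) { |acc, n| acc + n }  # => 10
```

Read that way, the Scala form is surprising only until you know that underscores stand for the anonymous function’s parameters.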

And more: The word "intuitive"... isn't.

And even more, from Matz in Japanese: http://www.rubyist.net/~matz/20080108.html#p02
 

Comments on “A programming language cannot be better without being unintuitive”:
> not “intuitive” to the programmer

It would be so nice to get beyond the scare quotes:

not familiar to the programmer
 
Isaac, how nice to hear from you. As you command, so shall it be done.
 
I am under the impression that the "Principle of Least Surprise" has been taken to mean three completely different things:
1) Least surprise to a random person. This is what you seem to be criticizing.
2) Least surprise to the language designer. That is, a language that works roughly how the designer envisages, without flow-on effects that surprise the language designer. This is the stated design philosophy of a number of languages designed initially for their designers' use.
3) Least surprise in an information-theoretic sense; that is, the surprisal of the way something has been done, conditionalised on the portion of the language already known, should decrease as that portion increases, and should have a low upper bound once the portion known exceeds some epsilon. This is the principle that I take to be in play, which means that consistency (in an aesthetic sense; that is, where inductive reasoning about the system will eventually supervene upon deductive reasoning) is a priori connected to least surprise, in that they should (if we live in a universe that makes sense, which is not something I feel we can provably prove) entail each other.
 
Justin:

I definitely criticize the use of definition 1. Well, I don't criticize it so much as underscore Raskin's point that you are limiting its upside.

It might be a valid design decision in a case where you are shooting for popularity amongst people who do not have the time or inclination to look beyond the surface.

As long as we don't claim that familiarity for the random person is equivalent to long-term benefit, I'm a happy guy.

The other definitions are much more interesting; they speak to feelings of consistency or stability.
 
reginald braithwaite: (3) speaks to the Kantian in me. Just to elaborate, perhaps unnecessarily: what (3) above says is that, given a subset of the programming language greater than epsilon, we can reason about the language inductively with a reasonable likelihood of being correct; that is, we can expect the language to behave how we think it should, and we will not often be wrong. This does not supervene upon the language being familiar, but on the language being consistent. The conflation comes in as people may incorrectly include their knowledge of other programming languages in their knowledge of the programming language of interest, meaning that anything different will seem inconsistent, and thus infinitely surprising.
 
I think your point applies more to semantics than to syntax.

Just because %$%EW#$#-W##@(%) is not familiar doesn't mean it's necessarily better or more powerful than the syntactically longer version.

It's the underlying semantics and idioms that should be markedly different, which is why your example with Ruby's functional iterators is a good one: it's a library that meshes with the language's features.

Great blog, as usual.
Cheers.
 
"It is clear that a user interface feature is “intuitive” insofar as it resembles or is identical to something the user has already learned."

So we say that the UI has an affordance.
 
What would the solution be anyway? Remove the /: shorthand?

Too many languages these days try to protect the programmer from himself, from the forced exception handling of Java to the idiotic forced indentation of Python.

It's like removing words from English in order to remove ambiguity. Double plus ungood.
 
"A programming language cannot be better without being unintuitive"

This statement applies to all programming languages, so we should apply it to Lisp. In what unintuitive way should Lisp be improved? What other language beats Lisp (in one area) thanks to a greater willingness to have unintuitive features?

BTW I actually think you're essentially right, and these questions have answers, because intuitiveness today (a function of our existing ideas) is not a reliable guide to what is good.
 
Elliot:

One of the dangers in arguing about Lisp is the temptation to slip into the Turing Tar Pit where the Lisper claims "You can do that in Lisp by creating a new language based on Lisp."

Lisp is a chameleon!

But I can provide an example for the sake of contemplation: lazy evaluation (which began in certain Lisp dialects and can be implemented easily in Lisp) is an improvement for certain functional programs when elevated to a built-in feature of the language.

This is not to suggest that Haskell is overall superior to Common Lisp, but for certain things, I believe it is an improvement.
 
Personally, I don't find (0/:l)(_+_) intuitive at all. I think it's a prime example of badly written, unreadable code.

It's no fault of Scala, though; the first time I saw it, I too read it as colon-one instead of colon-ell, and even the ell doesn't say much.

In expanded form, (0/:list)(_+_), it starts to make sense, once one knows that the 0 is the seed applied across the list, with the function doing the combining.

If badly written examples are an indication that a language is non-intuitive, then languages with substantial code bases (*cough*Java*cough*) are the worst offenders.

And while we're at it, here's my least-intuitive contribution:

_="_=%p;$><<_%%_";$><<_%_
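In case the contribution above looks like line noise: it is a quine, a Ruby one-liner that prints its own source. A sketch of the mechanism, with variable names of my own invention:

```ruby
template = "_=%p;$><<_%%_"
# In String#%, the %p directive substitutes the template's own inspect
# form (quotes included), and %% collapses to a literal %, so formatting
# the template with itself yields the complete program text:
program = template % template
$> << program  # prints the original one-liner
```

Unintuitive, certainly, until you know the %p and %% format directives; entirely consistent once you do.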
 
I think you may be setting up a strawman here. You can define "intuitive" to mean familiar and then argue it into the ground, I agree, though I rarely hear anyone who I consider reasonable use the term that way.

A more useful definition for "intuitive" is "not easily expressed linguistically." Given this definition, we want user interfaces to be intuitive (in fact, this is the entire reason to have GUIs). A programming language is a user interface. Therefore, we want programming languages to be intuitive.

For example, why do we prefer "foreach" over "for"? It's hard to explain, but we do. It's "intuitively" better. Of course, I can explain why it is better: it more directly corresponds to the concept of iterating over a list, it is less verbose, it is unambiguous, it is readable, etc. Then again, I think about programming languages a lot. Your "usual" programmer might not know why they like it better; they just do. They would call it "intuitively better."

In conclusion - "intuitive" does not mean "familiar" except to the layman. Familiar bad, intuitive good.
 
I disagree. Intuitive designs are not necessarily familiar: they're discoverable, guessable.

Dehora used the right term - an intuitive system is one that provides "affordances": visual hints that evoke conventions the user already knows, letting him guess how something will behave without actually trying it.

(For example, a button is intuitive: you press it. A button-like object that needed to be dragged would be unintuitive.)

Now, of course, you're right to some extent - for someone to "get" such a hint, they need to be familiar with the convention it's trying to evoke.

Basically, a system can be unfamiliar but intuitive, if it presents familiar _components_ in a new, but discoverable, way.
 
Echoing Dehora and laurie:

"What Makes a Design Seem 'Intuitive'?" by Jared M. Spool
http://www.uie.com/articles/design_intuitive/

He argues (convincingly) that intuitive means either:

* familiar
* or able to be learned effortlessly

or to be exact:

"Condition #1:
Both the current knowledge point and the target knowledge point are identical. When the user walks up to the design, they know everything they need to operate it and complete their objective.

Condition #2:
The current knowledge point and the target knowledge point are separate, but the user is completely unaware the design is helping them bridge the gap. The user is being trained, but in a way that seems natural."

In this scenario, a well-designed language can be more different from the status quo, and therefore more "better", while still not scaring people off, than one that ignores this principle.
 
I always found "violating the principle of least surprise" to be very important, BUT within the context of the current language, not within the context of all languages. Look at Java, for example: it's riddled with inconsistencies. Sometimes it's .size() and sometimes it's .length() and sometimes it's .length, and don't even get me started about the damn Calendar class.

Different classes performing the same conceptual operation within a language should use the same syntax to do it, thus not "surprising" you every time and making you constantly have to check the docs (or rely on the crutch of your IDE) for the less frequently used classes.

This familiarity is an incredible feature and doesn't hamper innovation in any way. You're free to add new things that act in new ways, but if you're doing the same thing then don't surprise me with a different API for it.
 
You forgot to mention that sometimes it's getLength()...
 
Just to nitpick a bit, it is possible to be radically different and yet more familiar/intuitive. The original Mac was more familiar/intuitive because it used metaphors from a completely different domain that the user was already familiar with. Similarly, languages with an algebraic syntax are more familiar/intuitive to those who have been taught that syntax in school.
 
Paul:

The debate about the Macintosh design is an interesting one, and I have a bottle of Single Malt ready so that we can sit down and draw it out enjoyably.

There are many factors in play. Yes, they tried to use familiar objects as metaphors. But as Raskin explains in his article, many things were extremely unfamiliar, such as the mouse.

And a huge part of its success was actually the consistency across applications, which speaks to what some of the other people are saying here.

As I say in the follow-up, I really believe that "intuitive" equals someone's subjective impression of good design; it is the consequence of design choices, not a choice itself.

And that subjective impression is strongly driven by familiarity.
 
Bill and Laurie are right ... discoverable & guessable == intuitive.
 
Intuition comes from being able to make decisions without conscious processing, which relies on the brain's pattern-matching ability, and those patterns have to be learned. Something may appear intuitive at first if you have already learned the patterns, or if they were easy to grasp. When you look at a mouse, it's not intuitive how it would be used: do you wave it in the air? Point it at the screen? But once you know, the pattern is so easy and immediate that it quickly becomes intuitive.

Familiarity is not sufficient for something to be intuitive. It can be familiar yet, with no clear patterns emerging, take an incredible amount of time to become intuitive. And intuition can be wrong, sometimes deadly wrong. Our intuition tells us, when we approach a turn too fast, to step on the brakes, which matches our experience in most driving situations. Not so on a bike: slowing down forces a wider turn; if you're approaching too fast, step on the gas.
 




Reg Braithwaite

