Friday, June 08, 2007
  Still failing, still learning

(This essay appears in the e-book What I've Learned From Failure: A quarter-century of experience shipping software, distilled into fixnum bittersweet essays)

The good news: I’m still learning.
The bad news: …from failure.

This post officially announces that my side project (originally named cause & effect and later named certitude) is over.

What I wanted to achieve

For those of you who weren’t subjected to one of my enthusiastic rants, here was my Graham Question: Can we predict the outcome of a software development project with objective observation?

Although most businesspeople would soil their khakis if they had to ride a tank into action, Ries and Trout are right on when they compare the four major strategies of war—Offense, Defense, Flanking, and Guerrilla—to businesses, especially start-ups. I rate this even higher than Crossing the Chasm for its ability to succinctly explain what growing companies have to do, and how to do it, to succeed.

After you’ve read what I have to say here about my failure, I invite you to read what Marketing Warfare has to say about Guerrilla Warfare (just three tactics to pursue!) and see if I'm a case in point.

I have always believed that the answer is Yes. And I manage projects that way. But I also strongly believe that the only way to prove that we have an objective understanding of an algorithm is to mechanize it: to write a piece of software that executes the algorithm.

So I set out to write a piece of software that, pure and simple, would look at a software development project and show you a traffic light: a green light would mean that the project looks like it’s on track, a yellow light would mean that the project needed help, and a red light would mean that there is no hope.

I won’t go into a lot more detail, mostly because this isn’t a story about teeming hordes of customers and me not being able to finish by deadline. It’s a story of inventing a great solution to a problem nobody cares about. But if you absolutely must visualize the application, think of something that gobbles up your MS Project files, your bug tracking data, even your burn-down spreadsheets, and belches out that traffic light when it’s done. That’s it.

So how and why did I fail?

Project management software is social software

Reason zero, just as Paul Graham warned: my age. Really. I remember how much code I could crank out at age 22, and now that I am double that age, I write an order of magnitude less. I’d like to say that my code is that much better that it makes up for the lack of volume. That might be true when I start a project with a specific end in mind, but the act of experimenting, tinkering, and exploring benefits from being able to turn large amounts of code around in short amounts of time.

This is reason zero because I wanted to get it out of the way before explaining why I think I still would have failed if I were 22. But it’s still important to understand, because had I accounted for it, I might have known two years ago that I was going to fail, instead of finding out today.

Reason one that I failed—and this is the most important reason I failed—is that I was trying to tackle a social problem with technology. This can work—ask any dating site zillionaire—but it is a very high-risk strategy for software. Not just for making money, but for what really matters, adoption.

Project management is a social problem.

Project management is a social problem. It is 99.5% about getting everyone who knows something about the state of the project to share what they know with everyone else. Getting all the relevant information is 99.5% of the problem; analyzing the information is 0.5% of the problem.

Joel likes to brag about how much trouble FogBugz takes to make it easy to enter bugs, and about how certain kinds of reports are deliberately unavailable, to discourage punitive social behaviours like punishing developers who generate too many bugs. This is a strong hint that getting good information is all about managing people’s perception of the likelihood of punishment for telling the truth.

Sitting here typing this, I think the company who can do the best job of predicting the outcome of software development projects is Inkling Markets. That’s because their entire business is about finding a way for people to communicate what they really think of something, not just what they think other people want them to say about something. I think Todd Proebsting would agree.

This problem isn’t limited to dysfunctional environments where people cower in fear of saying anything except “Sir, Yes Sir” when told to change the laws of physics.

Lemons, damned lemons, it’s always lemons

Project management suffers from a real lemon problem. I quoted Bruce Schneier at length about this already, so this time I’ll explain things directly:

Most managers, especially those with limited experience shipping software on a predictable schedule, do not know how to correlate what they’re told about the project with the likelihood that the project will succeed.

Some information is valuable, some is junk. The problem is, managers “buy” information. They trade favours like letting you keep your job for information about how well you are doing your job. So there is a market for information just like a market for cars.

They also “sell” information, literally: they have to make a report or a presentation to their superiors, or to stakeholders, or to their fellow founders at the YCombinator dinner.

When a manager cannot tell the difference between information that is useful for predicting the outcome of a project and information that is not useful for predicting the outcome of a project, she thinks about the next best thing: the “resale value” of the information with people one step removed from the project, like her own manager. So she values things like pretty PowerPoints about the architecture higher than finished pieces of functionality.

(This is why I have always sweated my heart out to give good presentations. My teams have depended on me being able to take good information and sell it upstream just as if it were CMM Level Five Buzzword-Compliant Junk).

Do managers further removed from a project always value pretty junk more than good, solid information? Not always, but often. And that’s enough for people to be pressured to give the bad information that sells to their manager, while hiding the good information that doesn’t sell. Exactly like the owners of good cars taking their treasures off the market.

Lemon and bay leaf crème brûlée

Why does junk information outsell good information? Nice PowerPoint isn’t a good explanation by itself: there are nice PowerPoints explaining Agile, but managers still prefer waterfall.

Consider a not so big design. Let’s call completing that design good information: we’ve done a good job finding out what’s really important for the project and making a design that emphasizes the way this project is unique, not the technology stack.

Now consider a typical technology design, emphasizing frameworks and technologies. Fully buzzword-compliant.

Which one sells better? The technology stack does. Why? Well, for starters, managers have been exposed to seventeen billion dollars worth of advertising talking about the benefits of technology stacks. Nobody is advertising the specific ways the not so big design helps the project. How could they? Those are specific to the project, that’s the whole point.

And managers are like anyone else, they compare what you are doing to successful projects they have seen in the past. Once again, the not so big design doesn’t have anything in common with other projects, but the technology stack does. (There are lots of failed projects with technology stacks, of course. But who cites those when bugging the team about whether they will use Hibernate as their ORM?)

How did this happen? How did things that have no correlation to the success of a project become more attractive than things that do?

Mmmmmm... Elegantly Easy Crème Brûlée... The title doesn’t lie, the recipes are easy: even I was able to make tasty desserts! For motivating a team or thanking your family for putting up with your devotion to your start up, dessert is always a good choice :-)

Quite simply, people have an incentive to look successful. So they imitate the outward appearances of successful projects. We have a really simple way of completing successful software projects: we put successful people on them. But we have a broken way of thinking about it: we don’t like to think of the people as being special, we think that what the people do is successful.

And by that logic, we can take anyone, have them do the same things as successful people, and our projects will succeed.

In a manager’s mind, the measure of whether information is good or not is, Does it measure whether people are doing the same things that successful people have done on projects I’ve been told were successful?

This is not the same thing as measuring whether the project is on its way to success at all. This measures the outward appearance of a project. Things that can be measured easily are rarely the most significant things. Behaviours that can be “gamed,” like how many hours a team is working, will be gamed.

And as above, even if a manager knows better, does her manager know better? If not, good information will be difficult to sell and she will be under a lot of pressure to serve Lemon Pie.

Off balance sheet transactions

There’s another important reason that projects have bad information: the best information is off balance sheet. That’s an expression meaning something a businessperson sweeps under the rug. Try Googling Off Balance Sheet Transaction. It’s never pretty.

In essence, project plans and reports never include the most important information about the likelihood of project success. Never. (I mean never in the same sense that Joel Spolsky means “nobody,” as in “fewer than 10,000,000 project plans”).

Let me give you a recent example. I was discussing a project with a certain manager in a client organization, and there was a rather linear series of releases. Her question was, Can’t we shorten the plan by looking at the dependencies and starting some releases in parallel with others?

Of course we could, but in doing so, we would increase the project risk. Just as a single developer has a problem keeping multiple tasks in one brain, a team has the same problem: when working on several unrelated pieces of software at the same time, the individual developers may only work on one thing at a time, but the managers, the testers, and especially the stakeholders who are thinking about functionality and exercising change control are overloaded, so they will make poorer decisions.

And worse, although you might think that there are no dependencies, you only think that at the outset of a project because of assumptions you are making now, before you fully understand what you are getting into. All in all, there’s a reason Agile projects have a practice of working on iterations with single themes, and the reason is to reduce risk.

Does this mean that manager wasn’t making a reasonable trade-off between risk and time? Maybe she was making a very reasonable choice. But let me ask you this: where on the plan would you see the risk component of the decision? If you were comparing Plan “A,” with the linear releases that finish in August, and Plan “B,” with some parallel releases finishing in July, how do you decide which plan is better?

Right, you can’t see anything except the difference in dates, so you choose Plan B. The risk component of the plan is off balance sheet. Enjoy your lemon.

(And it is very hard for a manager’s people to explain, for the twentieth time, that it is a mistake to try to optimize a project by having everybody working at once, because that crystallizes certain architecture decisions too early, and haven’t you read any Eliyahu Goldratt, whose novel The Goal explains what’s wrong with optimizing resource engagement when you should be managing project risk? The incentive is for them to keep their own counsel and put their résumés up on the web. Such is life.)

A more striking example comes from another project where, for various reasons, there was nearly 100% turnover of the senior staff, and finally an outside firm was brought in to clean it up. Do you think there was a notation on the plan for staff turnover? That has to have a huge risk implication, but the plan where you use the same team from start to finish looks exactly like the plan where new people are brought in for each phase or iteration.

Why is risk off balance sheet? I think it’s because people have a blind spot for risk in projects. After the fact of a project, you can always do a postmortem and say, “if we had done this, and this, and if we didn’t do that, we would have succeeded.” So you blame yourself for poor execution.

There’s a culture of Boolean thinking about projects and plans. They worked or they didn’t. We’ll be done in July or August. Not “With Plan A, we’re 90% August, 10% September,” and “With Plan B, we’re 12% July, 58% August, but 23% September and 7% October!”
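To make the contrast concrete, here is a minimal Monte Carlo sketch (not the tool I built; every estimate here is invented for illustration) of how a plan could report a distribution of finish dates instead of a single date. Plan A runs three releases back to back; Plan B runs two tracks in parallel, with wider pessimistic estimates to reflect the coordination risk:

```python
import random

random.seed(42)

def sample(task):
    """Draw one duration from an (optimistic, likely, pessimistic) estimate."""
    o, m, p = task
    return random.triangular(o, p, m)  # note: (low, high, mode)

# Hypothetical estimates, in weeks.
plan_a_tasks = [(2, 3, 5), (2, 3, 5), (3, 4, 6)]   # sequential releases
plan_b_tracks = [(4, 6, 14), (4, 6, 14)]           # parallel, riskier tracks

def plan_a():
    return sum(sample(t) for t in plan_a_tasks)     # sequential: durations add

def plan_b():
    return max(sample(t) for t in plan_b_tracks)    # parallel: slowest track gates

def distribution(plan, trials=10_000):
    return sorted(plan() for _ in range(trials))

for name, plan in (("Plan A", plan_a), ("Plan B", plan_b)):
    d = distribution(plan)
    median, p90 = d[len(d) // 2], d[int(len(d) * 0.9)]
    print(f"{name}: median {median:.1f} weeks, 90th percentile {p90:.1f} weeks")
```

With these made-up numbers, Plan B’s median finish is weeks earlier, but its 90th percentile is almost the same as Plan A’s: the time “saved” by parallelizing mostly evaporates in the risk tail, which is exactly the part a single-date plan never shows.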

Decisions that add risk to a project, such as excessive parallelization, the use of unproven people, the use of a team whose members have never worked together in the past, or the forcing of decisions prematurely, are not reflected in the plan.

(This is another reason there is pressure downwards on developer competence in many organizations. Consider two managers: the first staffs up quickly by selecting available developers, who may not be particularly good. In fact, they are less good than the existing team average. The second moves more cautiously, only adding to the team when the addition improves the average competency of the team. Do you think the second manager will keep their job long enough to finish a project? No, because the ticking time bomb of hiring certain types of developers is off balance sheet, but being understaffed is out there for everyone to criticize.)

I had this hope that, through a kind of collaborative filtering, I could have a piece of software notice that linear plans had less variance than parallel plans, that plans involving hiring too quickly had more variance than plans that staffed carefully, and so on, and adjust the traffic light accordingly. Regardless of what people would say, it would shrug its shoulders and remorselessly remark on the actual historical averages.

Nice idea? No, it’s a dud. (Or at least, my execution was a dud!)

Somebody set up us the bomb

The second major reason I bombed is that I couldn’t find any buyers. Quite simply, the people who understand these principles are running successful projects. I know lots of people who agree that there is a problem and agree this would help, but they don’t think they need help.

I couldn’t find anyone holding their head in their hands, saying, “If only I could get the truth about our projects, I could open my budget and shower you with gold.” The people who were aware of the problems with project information were busy using decidedly low tech tools—like hiring good people and having daily meetings and estimating tasks in spreadsheets—to solve their problems.

And the people who could use some help quantifying the consequences of their choices—like managers who insist on calcifying decisions very early in a telephone book design document so that they can demonstrate progress in the form of an architecture—they don’t think there’s anything wrong with their approach.

Why is that? My conjecture is that people want help with things that fit their model of what’s important. Someone who uses skilled practitioners and XP thinks that limiting risk is important: that’s why they use two week iterations and merciless refactoring rather than up-front design.

Someone using a classical BDUF management approach thinks maximizing resource allocation and managing task dependencies is important; that’s why they spend all of their time looking at Gantt and PERT charts. When you try to sell either of them something, they want to know how it will fit the model they have in their head about how to succeed with software development, not why they should consider a new model.

And I’m not sure they’re wrong about what’s important to them, personally.

If one of those BDUF projects fails due to the architecture being a poor fit for what the team discovers is really important about the projects, or from poor decisions being made in haste at the beginning of the project, well… managers will say that the problem was with the people making poor decisions. Such managers are not shopping for tools to help them understand the trade-offs, because they do not believe they are making trade-offs.

Critical Chain is an amazing book. The narrative form—a novella detailing a technical project team and their search for a way to manage an uncertain process—is a big win: it highlights the important ways that Critical Chain Project Management handles risk and uncertainty and makes it visible where everyone can manage it.

The section on estimating tasks alone is priceless. If you can’t afford a copy and your library doesn’t stock it, borrow mine. You must read this book if you participate in software development teams.

Something I learned from selling Macintoshes back in the day is this: only make one sale. Convincing someone they have a problem is one sale. Convincing them you have the solution is another. And convincing them that today is the day to act is a third. If you have to do all three at the same time, you are doomed.

This is why experienced companies distinguish sales from marketing. The first two steps are marketing, the third is selling. When you are a new company, you don’t have the resources to market and sell. You have to work with an established pain point (eliminating the first hurdle), then use PR and limited marketing funds to get the word out that you have solved the problem (the second hurdle). You only have time and energy for the third sale, separating customers from their money.

But trying to convince managers that they’re doing it wrong (tactfully), then convince them that they want your product rather than some big rigid waterfall thing or three by five cards and XP, and then convince them that they should write a cheque today…

Bad idea. I should know, I discovered that I was trying to do exactly that.

Now you understand why I have used so much of this post to rant about the state of project management in software development. When you understand what most companies are doing and why, you understand what will sell, and why. And when I understood that the things my project was measuring and reporting were of little interest to the people who were my market, well...

The not so big business plan

I think there are lots of things that will sell, that do sell into this market. When you understand that this is a social problem, when you understand that most information is bad and that the incentives are to value bad information, and for managers to value activities that produce bad information over activities that produce good information, you can make something people will buy.

Like books, lectures, methodologies, and video training. If people have a social problem, a social solution is a natural fit.

There were interesting possibilities like selling this to BigCo for “process improvement.” But whenever I thought about such ideas, I noticed that the software wasn’t in there. I could have written a book and lectured on these ideas. I could have done a “search and replace” of agile for certitude.

A good business plan is one that is really specifically about you and your software. If your plan could be executed by someone else, or with a different project, it simply isn’t the right plan. And all variations of turning this into ConsultingWare were excellent ways of monetizing Reg Braithwaite the brand, but not ways of spreading the adoption of certitude, the product.

And so… and so I looked the sunk cost fallacy straight in the eye and said: no more. As much as I hate to lose, I have folded my project, and I am sharing with you the important lessons I learned.

Lessons learned

Well, the good one is this: Paul Graham is right. If you phrase your venture as a question, you will walk away with something very valuable. I now know a lot more about project management and the culture of software development than I did when I started the project, and I wasn’t exactly a Spring Chicken. And I set out to learn about Bayesian Networks and Supervised Classification, but I ended up learning about Unsupervised Clustering and Collaborative Filtering.

What a journey.

And there is the one I ignored for far too long: Always Be Selling. I can thank friends like Leila Boujnane, Sutha Kamal, and Erich Finkelstein for reminding me about this, incessantly. Asking people to “buy” your idea forces you to make something people want. That being said, I think there’s a right amount of “push.” You can’t invent by polling the market. Quickly now, jump in your time machine and go back to 1981 or so. How many people wanted a mouse for their computer? And yet… Always be selling!

(There’s a bunch of other stuff that is far more personal, and maybe some of it will appear one day in public, but I wanted to keep this post to essay length, so it’s almost entirely about products and markets.)

So, here I am. Older, wiser, and ready for life’s next adventure. It might be more consulting for BigCo. It might be joining a start up and helping someone else’s dream flower. I don’t know, but if you have an idea...

My brain is open.

—Paul Erdős



Comments on “Still failing, still learning”:
Reg, I'm sure I fairly represent your readers/friends in saying:

1. Sorry that the project didn't work out.
2. Congratulations, however, on the lessons learned.
3. Thank you so much for unreservedly sharing this experience.

It's an exceptional blogger/person who takes weeks, months, or even years of their life and then takes the time to compress the experience so that folks like me can benefit.

I know your next venture will be that much better!
I agree. Thanks for sharing your experience. (And in such a thought-provoking post, too.)

I also have a side project which, it appears, is not going to work out as I had hoped. I must blog about it one day...
I got terrible news for you, Reg: predicting the completion date of software projects is a solved problem.

It's in The Wisdom Of Crowds. You set up a prediction market. That's all there is to it.

It's one of those little bits of information which will remain an edge for anybody who uses it for a long time - possibly forever - because cultural resistance to the idea is huge and the actual implementation difficulty is tiny.


This is not news to me, which is why I suggested that Inkling Markets have the best shot at it.

That being said...

You still need a crowd.

Software Development is carried out in relative secrecy. A big company like HP or MSFT can set up a decent market.

But how does your average company do it? There are a lot of details to be worked out before a sufficiently large crowd can wager on a medium or small project.

And back to the social problem. What makes you think most companies will risk having a crowd betting that they will fail?
That sucks!

On the other hand, that's one gem of a post, thanks for the valuable lesson.

At least this lemon turned into a good vintage Cabernet.
Interesting post, but I think it avoids stating what I'm starting to see as obvious.

I'm a professional project manager with BigCo, schooled expensively in most any of the methods you can name.

All of these methods assist in providing a framework for success: indeed, just by using a formal methodology, you're mitigating the risk of poor administration, quality, communication, and so on.

But they don't cover everything, and one line... just one... screamed out at me as I scanned back over your post:

"100% churn of the Senior Management"

Was that in the risk plan? 100% churn in significant stakeholders??

It wouldn't have been in mine... and even if it were, how could I influence the chances of something like that happening? Have a People guy chat with them about career plans on a regular basis? And with their bosses, to make sure they have no plans to fire them?

So, my point is that any undertaking of significance will always face the unknown or, more accurately, the unknowable.

Whatever the size and quantity of known risks you can place, you can double that for the risks you can't.
"100% churn of the Senior Management"

Was that in the risk plan? 100% churn in significant stakeholders??

I think you misunderstood: I said senior staff, which in that particular case meant the people doing the work.

Now, you asked, is that in the plan? And if so, what can you do about it?

This was not my goal. I wanted to measure the likelihood of success, not prescribe solutions.

And many project plans put the names of people next to the tasks to be done or outcomes to be generated.

An experienced person would note that the names were changing over the history of the project and raise a red flag.

However, most plans themselves have no place for actually noting the risk, for saying "Plan A, where we use one team from start to finish, has lower risk than Plan B where we have churn."

That's all I was pointing out. An experienced person would easily note this from the churn, but it is "off balance sheet," it is not formalized or noted.

Now as for what you can do about it, I can offer this suggestion:

People are less likely to drop out of projects that have realistic plans :-)
Thanks for the interesting insights. A question: did you work alone, or did you have a co-founder? I've found that starting alone is one of the main predictors for failure.
Your comment about "PERT charts" is actually quite a cogent one. No one I've seen managing software in the last three decades actually uses PERT charts. They use CPM charts and just call them PERT charts. The key difference is that a task in a PERT chart is marked not just with its expected duration, but with estimates of its minimum and maximum likely durations. One can use this information to predict a probability distribution for project completion times.

Oddly, this probability distribution is exactly what you call for in your excellent post, and we have had a tool to produce it since the dawn of software engineering. The fact that nobody actually uses the tool is a good indicator that your analysis is right: nobody cares. CPM charts are for justifying schedules, not for making them right, so why use a fancier methodology that makes it harder to sell lemons?
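For what it's worth, the PERT arithmetic described above is tiny. Here is a sketch using entirely made-up three-point estimates, the classic (O + 4M + P)/6 weighting and ((P − O)/6)² variance per task, and a normal approximation over the critical path (reasonable when the path has many tasks, per the central limit theorem):

```python
from math import sqrt, erf

# Hypothetical (optimistic, most likely, pessimistic) estimates, in days,
# for the tasks on the critical path.
tasks = [(4, 6, 12), (3, 5, 11), (6, 9, 18)]

# Classic PERT: expected duration (O + 4M + P) / 6, variance ((P - O) / 6)^2.
mean = sum((o + 4 * m + p) / 6 for o, m, p in tasks)
sd = sqrt(sum(((p - o) / 6) ** 2 for o, m, p in tasks))

def prob_done_by(deadline):
    """Normal CDF via erf: probability the path finishes by the deadline."""
    z = (deadline - mean) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))

print(f"expected: {mean:.1f} days, sd: {sd:.1f} days")
print(f"P(done within 25 days) = {prob_done_by(25):.0%}")
```

That is the whole "fancier methodology": a completion-time distribution instead of a single date, computable on the back of a napkin.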

You're absolutely correct, estimation variances are ignored by most practitioners. CCPM also works with variances, although it handles them a different way (with buffers).

If people used variances honestly, you would solve some of the problems. For example, the plan where we have a lot of parallelization ought to have larger variances than the plan where we are linear.

And there's another factor I left out (I know, it's hard to believe from the length that I actually trimmed material...): in software, many of our choices have a risk impact on acceptance/bugs/quality that is also off balance sheet.

Hiring a newbie vs. an experienced person might make no difference to the duration of a task, but it might have a huge impact on the variance in the expected bug squashing later in the project.

I almost never see this in actual plans. What I see is, when the project runs over length (a red flag that there are problems), they shorten the QA phase to make up the difference!

Need I go into the consequences?
You mean shortening the QA period for a piece of functionality that's so complex that its development has overrun its expected duration doesn't work? You'll never get your MBA with that kind of attitude.
The concept of a continuous integration-style dashboard for the project overall is pretty intriguing. That said, I think you nailed it with the social aspect - having ungainly software like Project or whatever that helps managers and their teams visualize all the balls that they've got up in the air is one thing, but if your manager doesn't already have enough information to generate some semblance of that green/yellow/red light in their head, then your project's in a bad way.
This is the best post you have written imo. Examination of (and learning from) failure is something almost totally ignored in software.

This post forced me to examine some of my preconceptions, and I learned a lot from it.
And back to the social problem. What makes you think most companies will risk having a crowd betting that they will fail?

Nothing makes me think that. I said in my first comment that I know most companies won't. That's why it'll remain an edge indefinitely; because there's resistance to the idea, despite its proven effectiveness.

Anyway, it's very true that prediction markets are only useful with a crowd. I didn't say I could name a useful implementation; just that a solution exists.
The current you would be able to write and design software much better than you could at 22.

In other words, at 22 you might have written inexperienced code, and hence taken more time to get it right.

I'm not a believer in whiz kids who can get it right the first time they do it. Not even those YCombinator companies.

Most whiz kids can create software, but the code quality is open to question.
Can we predict the outcome of a software development project with objective observation?

I have always believed that the answer is Yes.

This paper has a persuasive theoretical argument (based on algorithmic complexity theory) that the answer is No.

If this is true, it's not surprising that you couldn't write a computer program to do it.

Thanks for the link. This is very timely: I read Meta Math a few weeks ago, and it was really provocative, on the level of reading Gödel, Escher, Bach the first time (I especially loved his explanation, early in the book, about why some proofs are more important than others, establishing that merely knowing something is true is not the end of the journey, but perhaps just the beginning of another journey).
An ability to predict such an outcome is like predicting the future. A gypsy witch will do it better.

But what you could measure is the pulse of a project. The idea is simple.

1. Measure the number of HTTP requests to reddit/digg/slashdot per programmer. You'll know when he becomes bored with his task.

2. Measure the number of compiler runs relative to code size. You'll know when your programmer gets stuck.

3-n. etc.

Sure, all of this is like blood pressure: only the difference from the "norm" matters.

And it will never substitute for a good project manager you can talk to.

I think a five-minute talk each morning and another in the evening would do much more. If the manager is good.
I think a five-minute talk each morning and another in the evening would do much more. If the manager is good.

Aye, there's the rub: if the manager is good. Otherwise, what you get is a bad project full of people imitating what good people do.

That's what makes the whole thing interesting for technology or for machines.

What is it, exactly, that tells us the difference between good projects and bad?

Or, are there no good projects, just good people?
Another great post, Reg.

I used to wrestle a ton with these questions when I was working at a small to medium sized company in a not so competitive tech hiring market (Phoenix). There, on my good days, I could definitely be considered a "rockstar" at many top companies.

Since moving to the Bay Area recently, it feels much tougher to achieve the same level of "recognition" as one might in a less competitive market.

Tech interviews are seeking the top .001% highest IQ types; that just isn't me. I'm more like top 2% on my good days =)

While Google, Microsoft, Facebook, etc. have an ongoing War for Talent, I wonder if another company could develop a system to find the developers who most Get Stuff Done, ship code, etc. (as in Joel's "Smart & Gets Stuff Done" goal of hiring).

Looking at resumes only measures this trait so far. Two-minute academic coding problems usually measure IQ rather than the brute coding that is more likely to be done in the wild.

Any ideas? =)

What is it, exactly, that tells us the difference between good projects and bad?

Perhaps there are no such things as good or bad projects. As you said, there are projects in which people perform badly and others in which they perform well.

And it's a manager's job to know how they perform and to do a bit of magic to "light their fire" (tm).

To know how things are going, he must be the staff's best friend. They should be able to tell him things they would never tell even their wives. Like "I feel bad today, so I am going to read reddit till evening." Or "I have been writing this whole month, and I have done it all wrong." Or "I am stuck."
What is it, exactly, that tells us the difference between good projects and bad?

Sorry for my tendency to lose the topic. :) I am still recovering from a recent paradigm-shift shock.

OK, what we want to know is whether a project will end in success (a good one) or failure (a bad one), at a particular point in time.

In my opinion, software development is not about making products and finishing projects, but rather about producing a service.

Like a barber, we can never predict whether the customer will get what he wants or not. Even when he cuts his own hair.

Adopting this idea leads me to several conclusions.

Having dead ends and final products is a mistake (industry-wide).

Have a product usable from the very beginning. Launch it early. No, earlier. (You cannot serve customers if you have none.)

Refine it constantly. (Produce service).

When you stop, the service is dead. Software does not work without the humans who drive it.

These are mostly the ideas of XP, but here they have a theoretical ground behind them, not just practice.
Hey Reg,

We spoke before, after the Joel On Software meeting. I was working on a "semantic engine". You suggested I incorporate . I'm now considering a couple of applications -- social networking and synthetic biology. Do either of these, or semantic web technologies, tickle your fancy?
Er, that should be "incorporate bayesian filtering"...
Thanks for a really interesting, thoughtful post. As you say, project management is fundamentally a people problem, and so very difficult to predict accurately in software.

My point, however, is that I don't see any evidence in your story that your age has anything to do with your venture not working out. On the contrary, understanding the pitfalls of project management takes a bit of maturity and experience, so I think the "now-you" is likely to be better than the "old-you".

While I am also a big admirer of Paul Graham's writings, we shouldn't believe everything he says! Though, one of the other great Paul-isms "make something people want" seems to be at the root of the problem here.

Best of luck for whatever you turn to next!
"Reason zero, just as Paul Graham warned, my age. Really. I remember how much code I could crank at age 22, and now that I am double the age, I write an order or magnitude less."

Thanks for propagating a myth: I too am double 22, but rather than bemoan this fact, just this morning, in perhaps an hour or two, I cranked out my third implementation of a web/mvc meta-framework I first conceptualized less than a year ago -- and I wasn't cranking ANY code at 22, let alone 33.

I am not responsible for what you did or didn't do with your youth when you had it. I report my experience here.

I am not bemoaning anything, but I am explaining that at 44 (45 today!) there are some things I don't do as well as I did at 22, and some things I do better.

For sheer volume of code, I was better at 22. Heck, for sheer ability to work without sleep, I was better at 22. Most people who have parented a newborn for the first time in their thirties or forties can vouch for the fact that your tolerance for sleep deprivation changes with age.

Am I writing better software today? Absolutely. I don't want to trade my experience and knowledge for the vigor of my youth. Volume isn't everything. In fact, I'm not sure that the volume scales with languages. So I agree that a 20-something hacker should be able to out-Java a 40-something hacker due to sheer volume. But can a 20-something Lisper out-hack a 40-something Lisper? Maybe, but I'll bet the difference is sharply reduced. The ratio between hours of coding and ideas is different.

That being said, there are many other factors at work in the difference between myself at 20-something and myself at 40-something, some of which I am not going to share with you. The bottom line is, I believe I would have shuttered this business idea a year or even two sooner were I 24.

I stand by what I said, and if you read what I said, you will realize that I never claimed that I would have succeeded with this business had I been able to crank out more code in less time.


Carry on with your meta-framework, and I wish you every success.
Reg, as you can see I am months behind in my reading... Nice post. It is time for beers; patio is ready!

Thanks as always for sharing your thoughts!

We are shipping a release in two days, so believe me when I say I know what you are talking about! But since we are simple folks, we decided that our management mantra was going to be simple: white board(s), great team, daily status meetings, and a notepad (sometimes we even use a spreadsheet!) - sticky notes are great too, especially the rainbow-coloured ones! And tickets, tickets, tickets. Anything broken? Open a ticket!

Let's have a drink soon! Once our release is shipped...

You might find the following interesting:


You seem to be right about there being no money with this, but we had great success using it nonetheless.
You are now the first result at google for still failing.

-- Rasmus

Reg Braithwaite
