The Case for Cowboy Coding

It seems like a desirable thing to be able to look at work that you’ve done
over the years and say:

“That stood the test of time, I wouldn’t change a thing.”

And so, enthusiastically, I set forth to craft my project, infusing it with
creativity and passion, only to have the world assert itself:

Hey, I need this done this week.

So the one-week version was what got written, and we know the heart of something
great is there, but in an honest moment I look at the projects I’ve completed and say:

I’m not sure I’d do that again.

Or worse:

That was a waste of time.

Why is this? What can be done to correct this? Is something going wrong in my
preparation, my craft, my follow-through? With years of projects to look back
on, I’m faced with two archetypes of myself, about which I have conflicting
feelings. Which approach can I take to deliver value?

The Samurai Developer. The version of myself who prepares well and is
considered in their approach and work. A well-executed plan is the highest prize,
and large refactoring and abandonment the highest price.

“Measure twice, cut once.”, or “A stitch in time saves nine.”

Or the Cowboy Coder, who implements quickly, with minimal thought for the
long-term vision, and is comfortable with ambiguity; who loves getting the job
done, and testing the validity of requirements with a working prototype.

“Perfect is the enemy of progress.”, or “A bird in the hand is worth two in
the bush.”

I’ve used both of these mindsets in many projects. Let’s review a few:

The Generic Feature

This is a feature that will be used by many other downstream efforts. It has a
clear vision, and detailed requirements.

The Samurai will approach this with gravitas. I probe the requirements
for unforeseen roadblocks, and I’m willing to push the start date until the work
is truly ready.

I consider the data model and logical abstractions that will be needed. I
diligently implement and test The Generic Feature and do a good job.

The Cowboy delivers the feature ahead of schedule. It may be a bit more
austere, but it will be delivered.

Which I choose depends on this equation I subconsciously go through for each
task:

Value I Deliver = Benefit of the Task 
                - Implementation Cost
                - (Refactor Consequence × Risk)

So the Benefit of the Task is the reason we’re doing the task. Some things
are really worthwhile, while others are certainly worthless.

The Implementation Cost is the cost of building the feature and of waiting for
it to be delivered. My employer is paying me to do the work, and other engineers
and users are waiting for that work to land. In almost all cases, fewer days is
better.

The Refactor Consequence is the cost of refactoring the code. What would it
cost us if we had to refactor this code? Will we lose a client? Will we delay
a release, or am I the only person on earth who will ever know about this feature?
We’ll come back to this.

Risk quantifies the chance that we will need to refactor this. Some code
will never be touched again. Some code will be rewritten ten times as it
becomes the hot path for your application.
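
To make the arithmetic concrete, here’s a minimal sketch of that equation as a
Python function, with made-up numbers for The Generic Feature. The function name
and every figure are hypothetical, purely to show how the levers interact:

    def value_delivered(benefit, implementation_cost, refactor_consequence, risk):
        # benefit: what the task is worth once delivered
        # implementation_cost: the cost of building it, and of everyone waiting
        # refactor_consequence: what it costs us if we must redo the work
        # risk: probability (0.0 to 1.0) that a redo is needed
        return benefit - implementation_cost - (refactor_consequence * risk)

    # Invented numbers for The Generic Feature:
    samurai = value_delivered(benefit=100, implementation_cost=40,
                              refactor_consequence=30, risk=0.1)  # 57.0
    cowboy = value_delivered(benefit=100, implementation_cost=15,
                             refactor_consequence=30, risk=0.5)   # 70.0

Under these invented numbers the Cowboy comes out ahead, but only because the
risked consequence is small.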

Okay, so we’ve got some levers, and we can solve for the approach that maximizes
value. Let’s go to another example.

The Pacemaker

This project is make-or-break. Any failure mode has disastrous consequences
eclipsing the value of the whole project, let alone the feature. So if we plug
this into our equation:

Value I Deliver = Benefit of the Task 
                - Implementation Cost
                - (Company Ending Event × Risk)

In the case where failure is catastrophic, we must do everything we can to
reduce the risk. Estimating this up front lets us know whether what it would
take to implement is actually worth it.
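
Plugging invented numbers into the same sketch from earlier makes the dominance
of the consequence term obvious (again, every figure here is made up):

    # The Pacemaker: the consequence term dwarfs everything else.
    cowboy = value_delivered(benefit=100, implementation_cost=10,
                             refactor_consequence=10_000, risk=0.05)   # -410.0
    samurai = value_delivered(benefit=100, implementation_cost=50,
                              refactor_consequence=10_000, risk=0.001) # 40.0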

So the methodical Samurai will win out.

The Idea

Your company has definitely had an idea or two. One or two of them are why this
whole project is happening. These might be untapped treasures, or they might
be a colossal ego project for someone signing the checks. How should you
approach this?

Value I Deliver = Benefit of the Task # Nearly zero, as this is a lottery ticket.
                - Implementation Cost
                - (Refactor Consequence × Risk)

In this case the only variable we really control is the cost of implementation;
the benefit is a gamble. So do we build a full-featured implementation that will
succeed in any future, or do we build a prototype to validate the idea, knowing
full well we’ll have to refactor or throw it out?

The Cowboy wins here. They get to validate the idea the soonest for the
least amount spent, allowing a company to test the market with lots of ideas
and not bet the farm on a single one.
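
Plugging hypothetical numbers into the same sketch shows why: when the benefit
is a lottery ticket, the cost of implementation dominates the outcome.

    # The Idea: the benefit is a lottery ticket, so call it ~0 in expectation.
    full_build = value_delivered(benefit=0, implementation_cost=60,
                                 refactor_consequence=20, risk=0.9)  # -78.0
    prototype = value_delivered(benefit=0, implementation_cost=10,
                                refactor_consequence=5, risk=0.9)    # -14.5
    # Both are losses in expectation, but the prototype loses far less per
    # idea, so the company can afford to try many of them.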

Okay, so “it depends” on the situation. I think that’s something you knew
already coming into this article, but my challenge to you is that you should
almost always be a cowboy.

  1. Unknown unknowns. There is a category of roadblocks that simply cannot be
    known until the project is in motion. This significantly increases the
    likelihood that all code will be refactored anyway. That refactoring will
    benefit from both an existing prototype and knowledge of requirements that
    could not be known otherwise.
  2. Most of what we write does not matter. Look at code you wrote six months
    ago: would you write it the same way? Your code is not going to last forever.
  3. Refactoring will happen naturally as bottlenecks are discovered, and as
    features are expanded upon. Writing the simple version first avoids a whole
    category of premature optimization problems.
  4. There is a huge benefit in repetition. Solving a certain class of
    problem repeatedly will leave you with a great solution that you can readily
    produce. Take this example:

[A] ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality”, however, needed to produce only one pot — albeit a perfect one — to get an “A”. Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work — and learning from their mistakes — the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay. [1][2]

Does quality matter?

Yes, unequivocally. We cannot hire an army of interns to build prototypes
and expect to succeed in the market. Our same equation applies:

Value I Deliver = 0 # Benefit of the Task
                - Implementation Cost
                - (Refactor Consequence × 99%) # Risk

If your code does not meet the requirements, it is not delivering on the value
of the feature. If your code always has to be refactored, then any value
you’re delivering is going to be short-lived.

My favorite method is twofold:

  1. Create the simplest version of the feature that can be delivered.
  2. Reflect and, if necessary, refactor.

In this way we never commit a huge amount of resources to an unproven idea. Better
developers will deliver great simple features and get a chance to test them against
the market. Junior developers will get feedback on their implementation and how
to improve their approach.

So, in summary:

  1. Know the tradeoffs of your approach and weigh the situation against them.
  2. Place a high value on iteration time. Simple components are usually better.
  3. Most of our code is unimportant, and the important code will likely be refactored.
  4. Utilize prototypes and be a cowboy.

Learn to look at your code and say, “That was probably the right thing at the
time.”

References


  1. Article

  2. Book