Deadlines and Agility

I was recently asked to engage in a debate over whether or not there are deadlines in agile. There were a few folks involved in the debate and the predominant perspective seemed to be that true agile efforts have no external deadlines - all deadlines are self-imposed by the team in the form of an iteration commitment or a scope negotiation with the Product Owner.

This is bunk.

Comments rarely improve code

The debate over comments in code is ongoing. At least once per year for the last 30 years, I’ve been involved in a discussion on the subject - often accidentally and reluctantly. To be honest, my perspective has changed over time. I used to comment every method, I used to comment any line of code that was “weird”, and I used to comment any blocks of code that were too complicated. Today, I rarely comment, if ever. Over time, I’ve come to realize that most comments are unnecessary.

WIP, Throughput, and Little’s Law

In a prior post, we talked about why we should manage WIP. We showed that we can use a future value calculation to give us an idea of how long it will take to complete multiple items.

While our future value calculation is both informative and interesting, it is not particularly useful beyond making the point that doing more at once takes more time. What we really want to know is how this materially impacts our ability to make software. For that, we can look to a simpler (and more useful) calculation based on Little’s Law.
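As a quick sketch (the numbers below are invented for illustration), Little’s Law relates the three quantities directly: average cycle time equals average WIP divided by average throughput.

```python
# Little's Law: average cycle time = average WIP / average throughput.
# The example numbers are invented for illustration.
def average_cycle_time(avg_wip, avg_throughput):
    """Average time an item spends in process, in the throughput's time unit."""
    return avg_wip / avg_throughput

# A team carrying 12 items of WIP and finishing 4 items per week
# averages 3 weeks of cycle time per item.
print(average_cycle_time(avg_wip=12, avg_throughput=4))  # → 3.0
```

Halve the WIP to 6 while throughput holds at 4, and the average cycle time halves to 1.5 weeks. That is the practical argument for managing WIP.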

Why Manage WIP?

Having too much Work in Process, also known as Work in Progress (WIP), is a remarkably common issue. In my experience, management often encourages this behavior. I don’t know if it is the notion that we will get more done if we work on more things simultaneously. Or perhaps there is a fear we won’t get enough things done unless we work on several of them at once.

Measuring Agile Efficiency

This blog post is inspired by another Quora question: “What metrics do you use to track Agile Efficiency?”

To begin with, I want to state that if I had to choose between efficient and effective, I’d choose effective. Efficiency is often about output (how many widgets per hour), whereas effectiveness is often about outcome (was the purpose consistently met).

Agility is about responding to change. Efficiency is achieved by driving out variation. An over-focus on efficiency will lead down a path of standardization and control, making for a less agile system.

That said, given the question was specifically about agile efficiency, I’d look at a few things - Throughput, Cycle Time, Deployment Frequency, and Mean Time to Recovery.
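As a rough sketch of how two of these fall out of the same data, throughput and cycle time can both be computed from a simple log of work items. The dates and field names below are invented for illustration.

```python
from datetime import date

# Illustrative work-item log; the dates and field names are invented.
items = [
    {"started": date(2024, 1, 1), "finished": date(2024, 1, 8)},
    {"started": date(2024, 1, 2), "finished": date(2024, 1, 12)},
    {"started": date(2024, 1, 5), "finished": date(2024, 1, 15)},
    {"started": date(2024, 1, 9), "finished": date(2024, 1, 15)},
]

# Throughput: items finished per week over the observed window.
window_days = (max(i["finished"] for i in items)
               - min(i["started"] for i in items)).days
throughput_per_week = len(items) / (window_days / 7)

# Cycle time: average days from start to finish per item.
avg_cycle_time = sum((i["finished"] - i["started"]).days for i in items) / len(items)

print(f"Throughput: {throughput_per_week:.1f} items/week")  # 2.0
print(f"Average cycle time: {avg_cycle_time:.2f} days")     # 8.25
```

Deployment Frequency and Mean Time to Recovery come from your delivery pipeline and incident records rather than the work-item log, but the monitoring idea is the same: watch the trend, not a target.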

Removing Code Duplication

Today’s offering is another post inspired by a question on Quora about when to refactor away duplicate code.

The specific question was, “What is the limit for duplicate code to start refactoring?”

I took this to mean, “How much duplication needs to exist before you should refactor it?”

And I’m not sure that’s really a great question. So I decided to start by clarifying what duplication needs to be cleaned up, and then when that cleanup might happen.

When to Refactor Your Code

For me, refactoring is when we change the implementation of a piece of code without changing the behavior. That’s the entire definition - Change the implementation, but not the behavior.

As a result, I will generally not refactor code unless it has a solid set of tests around it. If I see code that needs refactoring but has insufficient tests, I will first add characterization tests.
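Here’s a minimal sketch of what I mean by characterization tests, using a hypothetical legacy function (the names and the discount rule are invented for illustration):

```python
# Hypothetical legacy function we want to refactor; names and rules are invented.
def legacy_price(quantity, unit_price):
    total = quantity * unit_price
    if quantity > 10:
        total *= 0.9  # undocumented bulk discount
    return round(total, 2)

# Characterization tests: pin down what the code does *today*,
# so we can change the implementation without changing the behavior.
assert legacy_price(5, 2.0) == 10.0    # no discount below the threshold
assert legacy_price(10, 2.0) == 20.0   # boundary: exactly 10 gets no discount
assert legacy_price(11, 2.0) == 19.8   # bulk discount kicks in above 10
```

With the current behavior pinned down, I can restructure the implementation freely. If an assertion fails, I changed the behavior, not just the implementation.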

Metrics Misuse - Goodhart's Law

Now, metrics are not bad. But, they are often used in bad ways.

It might help to be aware of some of the side effects of mismanagement of metrics. From inadvertently creating behaviors that actively work against our best interest, to altering the meaning of the metric, mismanagement can do more harm than good.

Goodhart’s Law

Charles Goodhart is an economist and former advisor to the Bank of England. In 1975, Goodhart delivered two papers to a conference at the Reserve Bank of Australia. In those papers, Goodhart was discussing research and theory related to monetary policy and control in the United Kingdom. In the years leading up to 1975, existing monetary targets and the controls used to achieve the goals were no longer producing the results desired or expected. There had been what most considered to be evidence of a stable money demand in the United Kingdom. It was believed that the growth of money could be controlled through the setting of short-term interest rates. Higher interest rates correlated with lower money growth.

Goodhart warned, however, that policies and practices based on specific targets were flawed. Goodhart stated,

“Any statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”

A common paraphrasing is, “When a measure becomes a target, it ceases to be a good measure.” When I talk about this, I tend to add, “And the target therefore no longer means what you think it does.”

Goodhart’s law is a critical piece of information when we think about metrics. No matter how tempting it might be, the moment we set a target for a measure, we’ve changed the system, thereby changing what the measurement means, thereby changing what the target means.

The lesson here is pretty simple. Don’t set targets for metrics. And please don’t give teams incentives towards targets if you do set them. I know. I know. Management 101 says this works. But, science says it doesn’t. Seriously. Setting targets and providing incentives for knowledge work lowers performance. Don’t do it.

Instead, provide guidelines to the teams. My favorite guideline for metrics is, “Monitor trending. Dig in when the trend changes and you aren’t absolutely certain why.”

This article is an excerpt from the book, “Escape Velocity”, available on LeanPub, Amazon, and elsewhere.

Refactor - You Keep Using That Word…

I stumbled upon a thread recently where the question was posed, “What are some common mistakes when refactoring code?”

The answers were interesting. The more I read, the more I realized that folks weren’t talking about the same thing. They were all saying “refactor”, but many were describing scenarios that sounded more like a rewrite than a refactor. This is not uncommon. I’ve encountered this on multiple forums, in Slack discussions, in blog posts, and in actual human-to-human conversation (it happens).

Metric Misuse - The Hawthorne Effect

Now, metrics are not bad. But, they are often used in bad ways.

It might help to be aware of some of the side effects of mismanagement of metrics. From inadvertently creating behaviors that actively work against our best interest, to altering the meaning of the metric, mismanagement can do more harm than good.

The Hawthorne Effect

Western Electric had commissioned an extensive study that ran from 1924 to 1932 at their Hawthorne Works in Cicero, IL. The intent of the study was to determine the impact of ambient lighting on worker productivity. Would employees be more productive under high levels of light or low levels of light? The workers in the factory were divided into two groups based on physical location within the plant. For one group, the lighting was increased dramatically while for the other group (the control) lighting levels remained the same. Researchers found that productivity improved among the group for whom lighting changed whereas the control group had no statistically significant change.

Employee working conditions were then changed in other ways. Working hours were adjusted, rest breaks were changed, floors were rearranged, workstations were kept cleaner, and several other adjustments were made, including returning the lighting back to normal levels and changing practices and policies back to original standards.

With every change, productivity made small improvements. By early 1932, and the end of the studies, the factory productivity was at an all-time high and employee attendance and retention were at record-setting levels. Some groups seemed to do better than others, but across the factory, all measures were improved.

When the studies ended, productivity, attendance, and retention soon returned to original levels.

The key takeaway from the Hawthorne studies is - that which gets measured will improve, at least temporarily. “The Hawthorne Effect” is described as the phenomenon in which subjects in behavioral studies change their performance in response to being observed.

This, at first, seems like a precious nugget of management gold.

1. Measure productivity.
2. Make it known.
3. Ka-Pow! Increased productivity.

The perfect management formula.

But the reality was (and is) that while that which is being measured shows improvement, it does not mean the overall system has improved. Working longer hours can lead to employee fatigue and burnout, as well as lower quality. Lack of attention in areas not measured, such as quality or workplace safety, can lead to other negative outcomes.

If your team is slacking so significantly that merely measuring their velocity can result in a marked increase in velocity with no ill effects, then you’ve a more serious issue at play than velocity.

What’s more, there is no guarantee that the thing being measured has actually improved. Velocity might have gone up because the team inflated story points. We should rephrase the key takeaway to that which gets measured will (appear to) improve.

This article is an excerpt from the book, “Escape Velocity”, available on LeanPub, Amazon, and elsewhere.