The Experiment Canvas

For the past several years, I’ve been talking about The Experimentation Mindset.

The talk was written when I was working at Groupon as Global Director of Engineering Culture. The original version of the talk covered the experiments we’d run in an effort to improve employee engagement and satisfaction among software engineers.

While we were able to make a significant difference in satisfaction and engagement among the majority of Groupon’s software engineering population, many of our early “experiments” lacked structure and were more opportunistic boundary pushing than designed experiments. But over time, we became more strategic.

In the past few years, we at OnBelay have had the honor of working with other companies on similar efforts and in that time have matured to a more structured approach.

One key tool we use today is the Experiment Canvas. My partner, Diane Zajac, and I co-developed the canvas. It is based heavily on our experience with A3s. It is still a work in progress, but I want to share with you where we are to date. Please feel free to use it and give us feedback.

Experiment Canvas

There are three basic parts to the canvas - Definition, Experiment, and Conclusion.

Definition

This is where we describe the opportunity or issue and provide data in support of the hypothesis and experiment.

Title

Here we define the opportunity or issue in a single sentence. This may be something high-level, such as “Software Engineering attrition rate is above market”, or something more specific, such as “Software Engineers are leaving due to lack of growth opportunities”. This is up to you, but we encourage specificity when possible. Smaller, targeted experiments tend to provide more conclusive results and can be stacked to address a broader issue.

Background

Here we broaden the title and give some history and context. We want to clearly and concisely state why we believe this opportunity or issue is important. “Software Engineering attrition rate is above market” might expand to include a brief narrative on what we believe are the top 3 contributing factors, how the rate is trending, and how long this has been an issue.

Current Situation

Now that we have a bit of history, we take a closer look at the current situation. This is a narrative of where and when the issue shows up, along with its impact or the opportunity it presents. How might we observe this today? Whether this is an opportunity we want to capitalize on, such as spreading a way of onboarding team members that is working for some pilot teams, or a risk we want to mitigate, such as database security issues due to API implementation, how can we substantiate the opportunity or issue?

Analysis

This section is more detailed. It includes causal analysis, which might be a fishbone diagram, a 5-whys exercise, or causal loop analysis. I personally prefer fishbone or causal loops over 5-whys; most of the items we’re dealing with are complex, and I find 5-whys to be more deterministically oriented than the other approaches.

In this section, we present supporting data in a readable format that visually tells the story. Ideally, we can use this same visualization to help us see whether our results are progressive, neutral, or regressive. Be careful not to manipulate the data to support your hypothesis; rather, let the data inform your hypothesis.

Data pertaining to the history, trends, current state, and impact may also be presented here.

Experiment

Here, we define the specific experiment. What are we doing, for whom? What impact do we anticipate? What are the specific steps?

Hypothesis

We’ve been working with a simple format for our hypotheses:

We believe that [doing action/countermeasure]
For [this/these person(s)]
Will achieve [this/these measurable outcome(s)]

For example:

We believe that providing a career lattice
For software engineers
Will achieve a decrease in attrition due to lack of growth opportunities

Whether you use this format or not, try to keep the hypothesis crisp. If you are doing several things or targeting different groups, you are far less likely to be able to attribute outcomes or impact to any one change.
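
If you track experiments in a lightweight script or tool, the same format can be captured as a small data structure. This is just an illustrative sketch in Python; the class and field names are ours, not part of the canvas.

```python
from dataclasses import dataclass

# Illustrative only: a minimal container for the hypothesis format above.
@dataclass
class Hypothesis:
    action: str    # doing [action/countermeasure]
    audience: str  # for [this/these person(s)]
    outcome: str   # will achieve [this/these measurable outcome(s)]

    def __str__(self) -> str:
        return (f"We believe that {self.action}\n"
                f"For {self.audience}\n"
                f"Will achieve {self.outcome}")

print(Hypothesis(
    action="providing a career lattice",
    audience="software engineers",
    outcome="a decrease in attrition due to lack of growth opportunities",
))
```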

Experiment

This is the mechanics of the experiment. Things we want to cover might be:

  • What are the specific steps?

  • Who is involved?

  • What roles/responsibilities are needed?

  • What is the anticipated timeline?

  • What are the boundaries - when/why might we stop an experiment “early”?

  • What outcome(s) are we targeting?

  • Will there be a control?

If you can, run a control. Maybe even a double control. A control is a cohort (group) that is statistically similar to the participants, but is not subject to the change. This allows us to better ensure that our outcomes are related to the experiment and not a shift due to some other factor(s). Maybe we discover that attrition due to lack of growth opportunities stays flat in the experiment group. We might conclude that the experiment had a neutral effect. But if the control group experiences an increase during the same time period, we might now see that our result was progressive relative to control while neutral relative to baseline.
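
As a rough sketch of that comparison, here it is with entirely made-up numbers; the rates below are hypothetical, not from any real experiment.

```python
# Hypothetical quarterly attrition rates, for illustration only.
baseline_rate = 0.08    # historical rate before the experiment
experiment_rate = 0.08  # experiment cohort during the experiment window
control_rate = 0.11     # control cohort during the same window

# Relative to baseline, the experiment looks neutral.
delta_vs_baseline = experiment_rate - baseline_rate  # 0.00

# Relative to control, the same result looks progressive.
delta_vs_control = experiment_rate - control_rate    # -0.03

print(f"vs. baseline: {delta_vs_baseline:+.2f}")
print(f"vs. control:  {delta_vs_control:+.2f}")
```

The point is simply that the same measurement can read differently depending on what you compare it against, which is why a control is worth the extra effort.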

Measurement

Here we cover how we will measure the experiment. Make a concerted effort to tie measurement to the targeted outcome. Consider how the data will be collected and reported. Can we get enough data to inform the experiment in a statistically significant way? Can we gather the data in a timely enough fashion to support the experiment? Additionally, consider what bias might exist in the process of collection.

Try not to look at what you can measure, but first consider what you want to measure. Then, figure out how that could happen.

In an ideal situation, the data you need is already being collected, has been used to substantiate your history and current state, and can be easily collected for the experiment.
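
When weighing whether you can get enough data to be statistically meaningful, a quick back-of-the-envelope check can help. Here is a minimal sketch, assuming hypothetical headcounts and using a standard two-proportion z-test; nothing about it is prescribed by the canvas.

```python
import math

# Hypothetical counts, for illustration only: engineers who left each
# cohort during the experiment window.
left_experiment, n_experiment = 4, 120
left_control, n_control = 11, 115

p1 = left_experiment / n_experiment
p2 = left_control / n_control

# Pooled two-proportion z-test: is the difference in attrition rates
# larger than chance alone would suggest?
p_pool = (left_experiment + left_control) / (n_experiment + n_control)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_experiment + 1 / n_control))
z = (p1 - p2) / se

# Two-sided p-value from the normal approximation.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"experiment attrition: {p1:.1%}, control attrition: {p2:.1%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

With cohorts this small, even a noticeable difference in rates can sit near the edge of conventional significance, which is exactly the kind of thing worth knowing before you commit to the experiment.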

Conclusion

This is the last section of the canvas and, once complete, indicates the end of the experiment.

Outcome(s)

Here we share the measurements, again in a readable format that visually tells the story. Hopefully, this is the same data we showed in the analysis. We should be able to see baseline and final reports. If there was a control, we should see its data and a contrast of the control versus the experiment cohort.

Provide a narrative of the conclusions. What did we determine based on the data?

Was the experiment progressive, neutral, or regressive?

A progressive experiment achieves measurable outcomes that substantiate the hypothesis and move us toward the opportunity or resolution.

A neutral experiment is inconclusive, or invalidates the hypothesis via measurable outcomes that are neither progressive nor regressive.

A regressive experiment invalidates the hypothesis by achieving measurable outcomes that move us in a direction counter to the hypothesis.

A regressive experiment may not be negative. A direction counter to the hypothesis is still a learning opportunity.

Next Steps

Here we indicate what is next.

If the experiment was progressive, do we plan to spread this to the organization? Will we run more experiments in different contexts or are we ready to make this a system and implement it?

If the experiment was neutral, do we discontinue the behaviors/activities in the experiment group or leave them in place?

If the experiment was regressive, are there countermeasures needed to correct the situation? What experiment do we want to run next, if any? What impact does this experiment have on the next - has the current state changed?

Feedback Welcome

What are your thoughts? What are we missing? Is this too rigid or perhaps not rigid enough? What have you tried that has worked well for you?