
Experiment and innovate toward high digital impact

Last updated September 7th, 2018.

Key Points

  • If only one team “wins” by doing an experiment, then it isn’t a very useful experiment from a website product management perspective.
  • We should always be swinging for the fences: only do experiments with an eye toward scaling up.
  • Experiments should either be unwound or promoted at their conclusion.
Related resource
Microsite Checklist | Use this one-page checklist to decide whether to create that microsite

Experimentation is one of the reasons to have a regular cycle of website changes, so that we are poised for frequent innovation. That said, experimentation should be done while thinking long-term and broadly about your web presence, so that it has much higher impact.

The Wikipedia definition is useful here: “An experiment is a procedure carried out to verify, refute, or establish the validity of a hypothesis.” 

Often the word “experiment” is used casually by a specific team to introduce some innovation they want on their particular part of a larger web presence. But if only one team “wins” by doing an experiment, then it isn’t a very useful experiment from a website product management perspective (since it is so localized). Also, if a change only introduces something new for the sake of something sparkly, then we aren’t really testing a hypothesis.

Plan for the possibility of unwinding an experiment

Remember, experiments in innovation don't always work out. That's ok and normal. The possibility of unwinding an experiment (like taking down a microsite where you were attempting something new) should be considered from the start.

We need to do three things for strong experiments:

  1. We have defined what we are testing.

  2. Implementing the experiment (and deciding whether to do it at all) is part of the normal ongoing change cycle.

  3. We either unwind experiments that disprove the hypothesis or promote those that are proven.

#1 We have defined what we are testing

Ideally the experiment would have some way of numerically determining success (like an A/B test), but at other times we may wish to be more qualitative (for instance, testing whether a different way of publishing is easier for content contributors). Either way, we should at least know what our hypothesis is, and how we will evaluate whether it is confirmed or disproved.
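To make the numerical case concrete, here is a minimal sketch of deciding an A/B experiment with a two-proportion z-test (the visitor and conversion counts are hypothetical; in practice they would come from your analytics tool):

```ts
interface Variant {
  visitors: number;
  conversions: number;
}

// Two-proportion z-test: is variant B's conversion rate different from A's?
function zScore(a: Variant, b: Variant): number {
  const pA = a.conversions / a.visitors;
  const pB = b.conversions / b.visitors;
  // Pooled conversion rate under the null hypothesis of no difference
  const pooled = (a.conversions + b.conversions) / (a.visitors + b.visitors);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors)
  );
  return (pB - pA) / standardError;
}

const control = { visitors: 5000, conversions: 150 };    // 3.0% conversion
const experiment = { visitors: 5000, conversions: 195 }; // 3.9% conversion

// |z| > 1.96 corresponds to significance at the 95% confidence level
const z = zScore(control, experiment);
console.log(z > 1.96 ? "promote" : z < -1.96 ? "unwind" : "inconclusive");
```

However the evaluation is done, the point is to decide the success criterion before the experiment starts, not after.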

#2 Implementing the experiment is part of the normal ongoing change cycle

Some changes should be fast and others slow, so some experimentation should be possible immediately. For example, changing the writing style could be done directly by a content team on the next content they publish (although of course this depends on your standards and guidelines). But the change this article is mostly concerned with is more structural in nature, like changes to templates or functionality (the type of change that, for example, might tempt teams to launch a one-off microsite).

This type of change should be considered against all the other candidates for change (and we always face a wall of potential changes), and only implemented if it deserves priority over them. Furthermore, sometimes the experimental approach can be adjusted to make it more likely to be included in the work program. For example, if an experiment can be implemented in a way that could easily be deployed to other parts of the site, then that experiment would be a higher priority.

#3 We either unwind experiments that disprove the hypothesis or promote those that are proven

Consider the team that wants to try a new method of presenting a data table (let’s say it involves JavaScript, and the standards disallow page-specific JavaScript). An early step is defining how the experiment might be run. One possibility is simply to open up the text editor to allow JavaScript, with the team putting in whatever code they want. Another would be to modify the existing table editor to allow new ways of presenting the data (perhaps not radically different, but testing one small difference). The one-off allows more radical changes to that one page but is a smaller test (since it is just one page). Changing the table editor also means that rolling the change out to other teams and sites would be more straightforward (or it’s simply available to all teams immediately for testing). That said, the implementation of the experiment may be quite different from the eventual full implementation if the hypothesis is proven.
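To make the “modify the table editor” option concrete, here is a sketch of what a shared enhancement might look like: it lives in the site-wide script rather than on one page, and upgrades any table opted in via a (hypothetical) data-experiment attribute, so extending the test to another team’s page is just a markup change:

```ts
// Upgrade every opted-in table to be sortable by clicking column headers.
function enhanceTables(root: Document = document): void {
  const tables = root.querySelectorAll<HTMLTableElement>(
    "table[data-experiment='sortable']" // illustrative attribute, not a standard
  );
  tables.forEach((table) => {
    table.querySelectorAll("th").forEach((header, columnIndex) => {
      header.style.cursor = "pointer";
      header.addEventListener("click", () => sortByColumn(table, columnIndex));
    });
  });
}

// Sort the table body rows by the text content of one column.
function sortByColumn(table: HTMLTableElement, columnIndex: number): void {
  const body = table.tBodies[0];
  if (!body) return;
  const rows = Array.from(body.rows);
  rows.sort((a, b) =>
    (a.cells[columnIndex]?.textContent ?? "").localeCompare(
      b.cells[columnIndex]?.textContent ?? ""
    )
  );
  rows.forEach((row) => body.appendChild(row)); // re-append in sorted order
}

document.addEventListener("DOMContentLoaded", () => enhanceTables());
```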

Regardless of the approach, we don’t want to clutter our web presence with a variety of one-off non-experiments that seemed like a good idea at the time but now just add up to an incoherent whole.

So at the conclusion of the experiment, we could take one of three possible next steps:

  • Unwind the experiment, especially if the hypothesis is disproven, but potentially even if it is proven and you decide not to promote the change. Possible ways of unwinding are rolling back to the way the content was before the experiment, explicitly labeling it as a test, or deleting it. Since experiments in innovation don't always work out (that's ok and normal), plan for unwinding from the start; one way of doing so is sketched after this list.

  • Promote the improvement proven in the experiment. The most satisfying next step would be to promote the new change broadly across your web presence. In fact, we shouldn’t even be doing an experiment if we don’t think we would want to do this (exception below).

  • Leave the experiment as-is (RARELY DO THIS!). The only reason to leave an experiment in place is if it is really, truly stand-alone, not only in how your organization treats it but also how the site visitor relates to it. So for example, if you have two completely different audiences that never overlap then perhaps you would leave an experiment in place without trying to roll it out wider.
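One common engineering pattern for keeping a structural experiment easy to unwind or promote (a general technique, not something prescribed by this article; the flag and function names are hypothetical) is to gate the experimental presentation behind a single feature flag, so either outcome is one configuration change rather than a content rollback:

```ts
// Central registry of experiment flags (could equally live in config or a
// feature-flag service). Flipping a flag unwinds or promotes the experiment.
const experimentFlags: Record<string, boolean> = {
  "new-data-table": true,
};

function isExperimentEnabled(name: string): boolean {
  return experimentFlags[name] ?? false;
}

function renderDataTable(data: string[][]): string {
  return isExperimentEnabled("new-data-table")
    ? renderSortableTable(data) // the experimental presentation
    : renderPlainTable(data);   // the original, known-good presentation
}

// Simplified stand-ins for the real renderers
function renderSortableTable(data: string[][]): string {
  return `<table data-experiment="sortable">${renderRows(data)}</table>`;
}
function renderPlainTable(data: string[][]): string {
  return `<table>${renderRows(data)}</table>`;
}
function renderRows(data: string[][]): string {
  return data
    .map((row) => `<tr>${row.map((cell) => `<td>${cell}</td>`).join("")}</tr>`)
    .join("");
}
```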

Aside from pure content experiments that can be done within your standards, consider the following when experimenting (this checklist is taken directly from Website Product Management):

A checklist for digital innovation experiments

  • Implement in a way that could reasonably be extended to other pages / sections / sites.
  • It does not benefit just one group.
  • You have defined what you are testing.
  • Implementing the experiment is considered a part of the normal ongoing change cycle.
  • You have figured out how to “unwind” the experiment if it does not provide the desired results.
Also: Change Request Flowchart