Analysis vs. Doing Something

How long should a team spend analyzing something before doing something?

A team I work with recently used a “spike” to investigate how to implement a new system capability. The spike was open for two sprints, although I am not sure how much effort was actually expended during that time. A spike staying open for two sprints made me curious about the trade-offs involved: given the agile value of Working Software over Comprehensive Documentation, was the team leaning too far toward comprehensive documentation? And if so, how would they know?

In thinking about this trade-off, I like the following thought experiment… what would each extreme look like?

Working Software – Do an experiment

At this extreme, with just the intention and then-known acceptance criteria for the feature (story), the team could simply start making the changes to support the feature, and then – with those changes made – start exploring the implications. What happens here and here and here with the implementation we have done so far? What does that tell us about what we have not thought about yet?

Pros:
  • Something is working sooner, on which feedback can be gathered.
  • It may be easier to find impacted areas and corner conditions once an implementation has started.
  • Learning that occurs is based on the experience of implementing, rather than just theory.

Cons:
  • Implementation may go down a “blind alley” – creating rework (having to backtrack and re-implement).
  • The partial implementation of an initial pass at the story may leave the software in an unreleasable state (e.g. the side effects of what isn’t yet done would be unacceptable to give to customers).
  • Easy to miss aspects and implications that would have been obvious with a small amount of pre-analysis.
  • After starting, might decide that the whole feature needs to be scrapped or re-thought.

Comprehensive Documentation – Do complete analysis before starting

At this extreme, development isn’t started until the feature and its implications have been thoroughly explored, reviewed and documented.

Pros:
  • Can catch many/most implications and corner conditions.
  • Less likely to go down a blind alley and have to rework.
  • Can review theoretical approaches and implications with stakeholders.

Cons:
  • Longer cycle time before something working is available to review.
  • In a complex system, some implications and impacts will still only be discovered once implementation is under way.
  • Approaches and implications remain theoretical and abstract.

Where’s the middle ground? What is reasonable?

The above extremes suggest places to look for a middle ground… not as a hard-and-fast rule, but as an exploration of conditions and approaches…

  • Timebox investigation: Can exploration be timeboxed to a couple of hours, or to some solo investigation plus a meeting with knowledgeable people? At what point does additional investigation and documentation time reach a point of diminishing returns? What’s “just enough” investigation?
  • Story splitting: Do we know enough to write the first story (first slice) and then use that as the basis to figure out what to do next?
  • Learning-driven implementation: Can the initial implementation steps be driven by learning goals? (How much would we need to implement in order to have something we could use to learn? What could be created that is valuable both to the customer and for learning?)
  • Cost of being wrong: How much effort is likely to be expended in backtracking if the initial approach turns out to be a blind alley? Does the team’s pursuit of technical excellence help keep the cost of change low?
  • Partial completion: Is a partially-completed feature likely to be acceptable or unacceptable? If unacceptable, can a feature switch be used to hide the new functionality until it is fully ready? Or (less preferred) can initial work be done in a branch?
  • Team acceptance: How comfortable is the team with uncertainty? What steps might help the team become more comfortable with an experimental approach?
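To make the feature-switch option above concrete, here is a minimal sketch in Python. All names (`FLAGS`, `new_capability`, the handler functions) are invented for illustration, not taken from the post; in practice the flag value would come from configuration or a flag service rather than an in-memory dictionary. The idea is that partially-completed work can merge and ship dark, with the released behavior staying the default until the flag is turned on.

```python
# Hypothetical feature switch: names are illustrative only.
# In a real system, FLAGS would be loaded from config or a flag service.
FLAGS = {"new_capability": False}

def is_enabled(flag: str) -> bool:
    """Return whether a named feature flag is on (off by default)."""
    return FLAGS.get(flag, False)

def current_implementation(data: str) -> str:
    # The released behavior customers see today.
    return f"current:{data}"

def new_implementation(data: str) -> str:
    # In-progress work for the new capability, hidden behind the flag.
    return f"new:{data}"

def handle_request(data: str) -> str:
    # The switch point: new code path only runs when explicitly enabled,
    # so a partial implementation never reaches customers by accident.
    if is_enabled("new_capability"):
        return new_implementation(data)
    return current_implementation(data)
```

With the flag off, `handle_request` behaves exactly as before; flipping the flag (per environment, or even per customer) exposes the new path for learning and feedback without a long-lived branch.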

Moving Forward

Did the team do the “right thing” in the case I cited? Is there even a “right thing”? I am not close enough to the case to say. What it did prompt me to do was consider the trade-offs and explore how a team might decide to approach such cases in the future.

What about you? How do you decide “how much” analysis to do for a feature before starting development?

Originally published October 18th, 2018 on the Innovative Software Engineering blog. Republished with permission.