How to measure progress on Agile projects

This question pops up again and again on Agile-focused LinkedIn sites.

Unfortunately, many agilists have anti-project or anti-management tendencies, often for good historical reasons (such as being on too many projects that turned into ‘death marches’). Asking a question about measuring progress is VERBOTEN because it implies management.

This is despite the fact that Agile is founded on Empiricism, the concept that learning can only be founded on evidence or the experience of the senses. The basis of Scrum is Transparency, Inspection, and Adaptation. Transparency is created by the sharing of all relevant information and the use of ‘information radiators’ such as highly visible task boards. The team inspects and adapts several times during a sprint (daily stand-up, sprint review, and sprint retrospective).

The Scrum Guide even goes so far as to include a section on monitoring progress where it states:

“At any point in time, the total work remaining to reach a goal can be summed. The Product Owner tracks this total work remaining at least every Sprint Review.”

Given this, measurement of progress should not be a forbidden topic, nor should those investing in Agile projects be kept in the dark about how to measure and track progress.

Velocity: The simple, and wrong, way to measure progress

People new to Agile most often focus on velocity, frequently without fully understanding what it is (and what it isn’t). Agile projects typically use relative estimation, which means velocity can’t be compared to, say, speed in a standardised measure such as miles per hour.

Why not?

Because the number of points assigned to a piece of work (a ‘story’) has no direct meaning outside of that team. One team may say a story is a size 5, another may size the same story as a size 3 or a size 8 (half or twice as large, respectively). So if one team does 50 points in a sprint and another does 80, it may just be that they have used different sizes for what is objectively the same amount of work. But it could also mean that one team has genuinely delivered more work than the other; you never know without understanding more about each team.
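
To make this concrete, here is a small illustrative sketch (the story names and point values are entirely made up) of two teams sizing the same three stories on their own relative scales. Both finish the same work, yet their point totals differ:

```python
# Illustrative only: story names and point values are hypothetical.
# Two teams size the same three pieces of work using their own relative scales.
team_a_sizes = {"login page": 5, "password reset": 3, "audit logging": 8}
team_b_sizes = {"login page": 8, "password reset": 5, "audit logging": 13}

# Both teams complete all three stories in a sprint, yet their velocities differ.
velocity_a = sum(team_a_sizes.values())  # 16 points
velocity_b = sum(team_b_sizes.values())  # 26 points

print(velocity_a, velocity_b)  # 16 26 -- same work, different numbers
```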

Aside from not being able to use velocity to compare teams, you also can’t use it to show progress at an aggregate level (say for a project that has 3 teams).

Velocity is, however, useful for the team to measure itself. Within the sprint the team will use a burn down to show how far they are towards achieving their sprint goal, and across sprints (a release burn down) it can tell them whether they are improving their ability to deliver more, or to deliver more consistently.
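
As a minimal sketch with made-up numbers, a sprint burn down is nothing more than the points remaining at the end of each day of the sprint:

```python
# Sketch of a sprint burn down: points remaining at the end of each day.
# The sprint commitment and daily completions below are hypothetical illustration data.
sprint_commitment = 50  # points planned for the sprint
completed_per_day = [0, 5, 8, 3, 0, 13, 5, 8, 5, 3]  # one entry per working day

remaining = sprint_commitment
burn_down = []
for done_today in completed_per_day:
    remaining -= done_today
    burn_down.append(remaining)

print(burn_down)  # [50, 45, 37, 34, 34, 21, 16, 8, 3, 0]
```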

Business value: The best measure of progress

At the other end of the scale is measurement of business value, be it increased revenue, increased efficiency, or simply the ability to do something you couldn’t do before.

This is what Agile is all about: the delivery of small amounts of value incrementally. It enables an organisation to realise value quickly and often, rather than waiting until the end of a project to get all the value at once (the ‘best case’ under Waterfall).

However, incremental delivery of value is hard work, especially for a team or an organisation new to agility. It requires a high level of skill across a number of areas that will be covered in future blog posts:

  • Emergent design
  • Iterating a solution to add more functionality over time
  • Delivery at speed, only possible using modern technical practices such as Test-driven Development (TDD) and Continuous Integration (CI)

Lead Time is a concrete proxy for value, independent of the actual realised value (which may not be immediately measurable), and is very useful as a way of measuring the efficiency of your organisation. It is simply the elapsed time from idea to implementation.

Not only does it measure the efficiency of your delivery team, it also measures the efficiency of your ideation, funding, and governance processes. Organisations that take weeks or months to approve a small piece of work will have long Lead Times. Those whose governance and funding models are nimble will have short Lead Times. Short Lead Time is critical for competing in a Digitally Transformed world where customers expect personalised, on-demand experiences.

So how do you measure Lead Time? Easy: simply measure the time in days or weeks from ideation to implementation for each initiative you conduct. This should also include the time spent writing funding cases and getting approval to start. If you want a real fright, use value stream mapping on your project approval process and/or your gating (“QA”) processes before production releases.
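
As a rough sketch (the initiative names and dates below are hypothetical), Lead Time per initiative is just the elapsed days between the idea being raised and the change going live:

```python
# Lead Time sketch: elapsed days from idea to production, per initiative.
# Initiative names and dates are hypothetical illustration data.
from datetime import date

initiatives = [
    {"name": "self-service password reset", "idea": date(2023, 1, 9), "live": date(2023, 3, 17)},
    {"name": "invoice export",              "idea": date(2023, 2, 1), "live": date(2023, 2, 24)},
]

for item in initiatives:
    lead_time_days = (item["live"] - item["idea"]).days
    print(f"{item['name']}: {lead_time_days} days")
```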

Other useful measures of progress

Most teams, projects, or organisations are somewhere in the middle; they don’t want to misuse velocity, but they still have some way to go in improving their nimbleness.

For more nuanced use of velocity you have to know how big the journey is. It’s no good knowing you are averaging 50 points a sprint if you don’t know how many points remain until you are done. You would make very different decisions if you had 200 points to go or 2,000.

So how do I size the whole journey?

You can use techniques like hierarchical relative estimation, affinity estimation, or bucket estimation.

These techniques quickly get you to the point of understanding roughly how far the journey is without wasting too much time in estimation of items you may or may not get to. Remember, an estimate is not a fact, it is a guess. Like many uncertain things, a little effort can reduce a lot of uncertainty, but no amount of effort will completely remove uncertainty (short of actually performing the task that is!).

What you should end up with is a list like this. This piece of work is currently sized at 408 points.

[Image: Total Story Points]

Be careful with this number. It’s your best guess as at the time of estimation, and there are lots of large-sized stories (or epics) in it. When a story is larger than a size 8 it means there is a lot of uncertainty (a.k.a. risk). When the team knows more about the domain, a single size 13 story might be broken down into three size 5s and two size 3s. Or it might end up being a single size 8; you never know until you get there and know more (Empiricism, remember).
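
If it helps, here is the same arithmetic spelled out, using the hypothetical outcomes described in the paragraph above:

```python
# Sketch of how an epic's estimate can change once the team learns more.
# Both outcomes are the hypothetical splits described in the text.
original_epic = 13

# Outcome 1: the epic splits into three size-5 and two size-3 stories.
split_outcome = 3 * 5 + 2 * 3  # 21 points -- more work than first thought

# Outcome 2: it turns out to be a single size-8 story.
single_outcome = 8             # less work than first thought

print(original_epic, split_outcome, single_outcome)  # 13 21 8
```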

And how do I show progress?

I find using a burn up chart for the project much more useful than a burn down for the simple reason that it shows changes in scope (total number of points) much more easily. The chart tracks progress against total scope and is typically updated at the end of each sprint. It doesn’t make sense to update it any more often than that, and a sprint should only be up to four weeks long anyway.

A burn up chart has two lines: a total scope line (which is the total number of points in your backlog) and a trend line (which can be calculated once you have at least three velocity readings for your team – after 3 sprints).

[Image: Release Burnup]

The total scope line will change, and that’s OK: it changes because the project has learnt more about what people would like it to deliver. This may be new features (what project managers often call ‘scope creep’) or it could be new knowledge about existing features resulting in higher story point estimates. I’ll explain why this is not the end of the world in another post. At the moment this project has a total scope of 408 points and a forecast finish around sprint 13.

The important thing to recognise is that the total scope line very transparently shows changes in total estimated work (up or down) and it enables conversations to occur.

The intersection of the trend line with the total scope line shows the sprint in which, on current forecasts, the whole backlog could be delivered. The forecast is based on current knowledge and is likely to shift as the project progresses.
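
As a minimal sketch of how that forecast can be derived (using hypothetical sprint velocities, chosen to be consistent with the 408-point example above):

```python
# Burn-up forecast sketch: remaining scope divided by average velocity.
# Total scope comes from the example in the text; sprint velocities are hypothetical.
import math

total_scope = 408
completed_per_sprint = [28, 35, 33]  # needs at least three readings for a trend

completed_so_far = sum(completed_per_sprint)           # 96 points done
average_velocity = sum(completed_per_sprint[-3:]) / 3  # 32 points per sprint
remaining = total_scope - completed_so_far             # 312 points to go

forecast_sprint = len(completed_per_sprint) + math.ceil(remaining / average_velocity)
print(forecast_sprint)  # 13, matching the forecast finish in the example
```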

If you want the lines to intersect earlier you can a) reduce scope if you are a time-driven project, b) increase time and cost if you are scope-driven, or c) tweak your Definition of Done (your agreed quality standards) if you would like to fix both scope and cost.

In the example below the Product Owner or Sponsor made the call to close the project after 10 sprints as they received information that a competitor was also developing a similar product and might beat them to market. The team helped by running a quick session to validate that a simpler product could be built within that timeframe. The de-scoping is fully transparent as shown by the sudden drop on the blue total scope line.

[Image: Release Burnup – after de-scoping]

At the end of each sprint the burn-up can be reviewed and further decisions taken based on up-to-date progress information. It may be that more scope needs to be de-prioritised to be first to market, or the team could have a few stories that were easier than expected and more scope could be added. Because of the high transparency, and because you ensure your product is releasable at the end of each sprint, you have the choice.

Very simple; it just requires grown-up conversations based on actual data!