Change and Uncertainty
In the dark ages before your team became agile, you would make estimates and commitments. You never exactly met your commitments, and no one really noticed. That was how the game was played. You made a commitment, everyone knew it would be wrong, but they expected it anyway. Maybe your boss handicapped your commitment, removing scope, lowering expectations, padding the schedule. Heck, that’s been the recipe for success since they planned the pyramids.
It makes sense.
- Your early estimates are wrong. When you add them up, the total will be wrong. If you do PERT estimation, the law of large numbers will help you in aggregate. But you’ll still be wrong.
- The outside demands on, and availability of, your people will change. Unplanned sick time, attrition, shifting levels of commitment over time – a lot of “people stuff” is really unknown.
- The needs of your customers will change. Markets evolve over time. You get smarter, your competitors get better, and your customers’ expectations change.
Agile processes are designed to help you deliver what your customer actually needs, not what was originally asked for. Contrast the two worlds.
In the old world, you would commit to delivering a couple pyramids. After spending double your budget, with double the project duration, you would have delivered one pyramid. When you deliver it, you find out that sphinxes are all the rage. Oops.
Your team changed to agile, so that you could deliver the sphinx. But your Pharaoh still wants a commitment to deliver a couple pyramids (the smart ones will be expecting to get just one). You can stay true to agile, and still mollify your boss’s need to have a commitment, if you take advantage of the first principles of why agile estimation works.
A commitment is a factual prediction of the future. “This will take two weeks.” Nobody is prescient.
A factual prediction has to be nuanced. “I expect* this will take no more than two weeks.”
*in reality, this is shorthand for a mathematical prediction, such as “I expect, with 95% confidence, that this will take no more than two weeks.”
Few non-scientists, non-engineers, or non-mathematicians understand that 95% confidence has a precise meaning. People usually interpret it to mean “a 5% chance that it will take more than two weeks.” What it really means is that if this exact same task were performed twenty thousand times (in a hypothetical world, of course), then nineteen thousand of those times, it would be completed in under two weeks – do you feel lucky?
To make a statement like this, you actually have to create a PERT estimate – identifying the best-case, worst-case, and most-likely case for how long a task will take.
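The classic PERT formulas can be sketched in a few lines. The task durations below are hypothetical, chosen only to illustrate the arithmetic:

```python
def pert(best, likely, worst):
    """Classic three-point (PERT) estimate for a single task, in days."""
    mean = (best + 4 * likely + worst) / 6   # weighted toward the most-likely case
    sd = (worst - best) / 6                  # standard PERT spread approximation
    return mean, sd

# Hypothetical task: best case 3 days, most likely 5, worst case 13.
mean, sd = pert(3, 5, 13)       # mean = 6.0 days, sd ~ 1.67 days
upper_95 = mean + 1.645 * sd    # rough one-sided 95% bound ~ 8.7 days
```

Note how the long worst-case tail pulls the expected value above the most-likely case, and how the 95% bound is what a nuanced commitment would actually quote.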
Unfortunately, we’re rarely asked to make a commitment about a single task – but rather a large collection of tasks – well-defined, ill-defined, and undefined.
You can combine PERT estimates for the individual tasks, resulting in an overall estimate of the collection of tasks.
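A minimal sketch of that combination, with made-up per-task estimates: means add directly, and (assuming independent tasks) variances add, so the combined spread grows more slowly than the sum of the individual spreads:

```python
import math

# Hypothetical per-task PERT estimates as (best, likely, worst) tuples, in days.
tasks = [(1, 2, 5), (2, 3, 8), (3, 5, 13), (1, 1, 3)]

total_mean = 0.0
total_var = 0.0
for best, likely, worst in tasks:
    mean = (best + 4 * likely + worst) / 6
    sd = (worst - best) / 6
    total_mean += mean        # expected durations add directly
    total_var += sd ** 2      # variances add, assuming independent tasks

total_sd = math.sqrt(total_var)
# total_sd is noticeably smaller than the sum of the individual sd values,
# which is why a group of tasks can be estimated better than any one task.
```

This is the central limit theorem working in your favor: the relative uncertainty of the aggregate is smaller than the relative uncertainty of its parts.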
The beauty of this approach is that the central limit theorem, and the law of large numbers, work to help you estimate a collection of tasks – you can actually provide better estimates of a group of tasks than a single task. This obviously helps with the well-defined tasks that you know about at the start of the project. This even helps with the ill-defined tasks. Rationalists will argue that the key, then, is to do more up-front research to discover the undefined tasks – and then we’re set. As Frederick Brooks (Mythical Man-Month) points out in The Design of Design, this debate has been going on since Descartes and Locke. It is not a new idea.
Big Up-Front Design and Requirements (BUFD & BUFR) hasn’t worked particularly well, so far.
Don’t throw out the baby with the bathwater, however. The math of estimation is still important and useful, even if empiricism is not the silver bullet.
Estimation is a form of prediction. Even agile teams do it. In Scrum, you estimate a collection of user stories – in story points that represent complexity, and you predict how many points the team can complete in this sprint. Note the time factor. If you’re working a two-week sprint, there is very little risk of changes in staffing during a two-week period. There’s also very little risk that your market will change significantly in two weeks – and if it does, what are the odds that you will notice and materially change your requirements in two weeks?
Visually, let’s take that PERT estimate and turn it sideways – so we can introduce the dimension of time. Imagine you estimated all of the tasks (well-defined, ill-defined, and a guess about the undefined), as if they were all to happen in the first sprint. Ignore inter-task dependencies, and pretend you had unlimited resources and the ability to perform all tasks in parallel.
The graph above shows the aggregate estimate – the circle is your best prediction, with error bars representing your confidence interval in the estimate. If you were using PERT estimates, these could represent the 5% and 95% confidence lines. Subjectively pick something based on your team’s experience in the domain and your confidence in your guesses (about the undefined tasks).
We need a segue into the “best of waterfall” approach to estimating projects, to steal and invert a good idea.
The Cone of Uncertainty
The folks at Construx have published a nice explanation of the cone of uncertainty – an adaptation of an idea from Steve McConnell’s Software Estimation: Demystifying the Black Art (2006). That article uses his imagery with permission – so please go look at it there. The idea is that as the project becomes better defined (e.g. during the project), the amount of uncertainty is reduced.
The findings show that initial estimates are off by 400% (either low by a factor of 4 or high by a factor of 4)! Even after “nailing down” requirements, estimates are still off by 30% to 50%!
As bad as that sounds, it is actually worse. This is a prediction for the original project (delivering pyramids). Not only are your estimates wrong – but they are bad estimates for delivering the wrong product.
But – the core idea is sound – the further into the future you have to execute, the greater the mistakes in your estimate.
Taking that concept, and applying it to our diagram, we get the following:
The further into the future you are trying to predict, the less accuracy you have in your prediction. This reduction in accuracy is reflected as a widening of the confidence bands for your estimate.
- A couple sprints’ worth of work is not much different than one sprint – so your estimation range is not much changed.
- An entire release of sprints (say 6 to 10 sprints) has much more opportunity for the unknown to rear its head.
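One way to picture the widening: assume each extra sprint of horizon multiplies the relative error band by a fixed growth factor. The 10% starting band and the 1.25 growth factor below are invented illustration values, not empirical constants:

```python
def horizon_band(sprints, base=40, band0=0.10, growth=1.25):
    """Return (low, high) bounds on total points for a plan `sprints` long.

    Illustration only: models the cone of uncertainty as a relative error
    band that compounds by `growth` for each sprint of planning horizon.
    """
    band = band0 * growth ** (sprints - 1)   # relative band at this horizon
    total = base * sprints                    # naive point estimate
    return total * (1 - band), total * (1 + band)

low1, high1 = horizon_band(1)   # next sprint: 36 to 44 points (tight)
low8, high8 = horizon_band(8)   # full release: roughly 167 to 473 points (vague)
```

By sprint eight, the band has compounded from plus-or-minus 10% to nearly plus-or-minus 50% – the “unusably vague” territory described next.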
Now, your prediction is (probably) unusably vague and imprecise. “This set of tasks will take X plus or minus a factor of two.”
That’s the reality.
Note: This has always been the reality. People have historically reduced this “risk to timing” by hiding the “risk of change” aspects – and waterfall processes encourage you to deliver the wrong thing, as close to on-time as possible.
That’s not what we want to do, however.
We still want to deliver the (not-yet-defined) right product, as efficiently as possible. That’s the goal of agile. (For folks who haven’t been here at Tyner Blain for long – “right” includes both value and quality.)
Refinement
Because we’re agile, and we’re willing to “get smarter” about our product over time, we have an opportunity to improve. Because of the nature of compounding estimates and the cone of uncertainty, our uncertainty gets smaller over time.
Let’s remove our artificial simplification that we could do everything “right now” and look at what we think we know right now, about the end of the release.
Our ability to predict the amount of effort (for today’s definition of the product) at the end of the release is not very good.
Our ability to predict (today’s definition of the product) one sprint into the future is much better.
After completing the first sprint, we are a little bit smarter – the ill-defined tasks are better defined. Maybe some of the undefined tasks are now ill-defined. The same cone of uncertainty is now a little bit smaller – we are a little bit smarter, and the time horizon of the release date is a little bit closer.
The trend continues – each sprint gets us closer to the release date, and with each sprint (assuming we get feedback from our customers, and continue to study our markets) we get a little bit smarter. We also get better at predicting the team’s velocity (how much “product” they can deliver during each sprint).
Your boss still wants a commitment, however. And that’s where we get to change the way we look at this (again).
The above diagrams all display how we converge on an estimate for a stable body of work. However, we know that the body of work is constantly changing.
Backlog! [you say]
Yes! The backlog. The backlog is an ordered, prioritized list of user stories and bugs. I was talking with Luke Hohmann of Innovation Games last month, and one of the most popular online Innovation Games is now the one they created based on prioritizing by bang for the buck. Play it today online (for free!). How cool is that?
The backlog represents the work the team is going to do – in the order in which the team is going to do it. Over time, as we get smarter, we will add and remove items from the backlog – because we discover new capabilities that are important, and because we learn that some things aren’t worth doing. We will even re-order the backlog as we recognize shifting priorities in the markets (or in our changing strategy).
As this happens, it turns out that the items at the top of the list are least likely to get displaced, and therefore most likely to still be part of the product by the time we get to the release.
Instead of thinking about uncertainty in terms of how long it takes, think about uncertainty in terms of how much we complete in a fixed amount of time. In agile, generally, we apply a timebox approach to determining what gets built.
Now, uncertainty, instead of manifesting as “when do we finish?” becomes “what will we finish?”
Your boss is rational. She appreciates the constraints; she just wants to know what you can commit to. Every boss I’ve worked with has been willing (sometimes only after much discussion) to treat this uncertainty in terms of what instead of when. They acknowledge that they need to translate (usually for their boss) into a “fixed” commitment.
The solution: commit to a subset of what you predict you can complete.
At the start of the release, you may have 500 points worth of stories. Based on your team’s expected velocity, and the number of sprints in the release, you predict that you can complete 320 points worth of stories (5 people on the team, a team velocity of 40 points per sprint, and 8 sprints in the release). Starting at the top of the backlog and working down, draw a cut-line at the last story you can complete (when you reach 320 points). This is your prediction.
Now the commitment part. You’ll have to figure out what you’re comfortable with. Maybe for 8 sprints (say, 16 weeks into the future), you may only be comfortable committing to half that amount – 160 points. Go back to the top of the backlog, and count down until you reach 160 points. Everything above the line is what you commit to delivering.
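A sketch of the cut-line exercise above. The story names and point values are hypothetical (they total 500 points, matching the example), and the backlog is assumed to be already ordered by priority:

```python
# Hypothetical prioritized backlog totaling 500 points of stories.
points = [40, 35, 30, 30, 25, 25, 25, 20, 20, 20, 15, 15, 10, 10, 40, 40, 40, 30, 30]
backlog = [(f"story-{i}", p) for i, p in enumerate(points, start=1)]

def cut_line(backlog, budget):
    """Return the stories above the cut-line for a given point budget."""
    committed, total = [], 0
    for story, pts in backlog:
        if total + pts > budget:
            break                # the cut-line falls here
        committed.append(story)
        total += pts
    return committed, total

prediction, _ = cut_line(backlog, 320)   # what you predict you can finish
commitment, _ = cut_line(backlog, 160)   # the safer subset you commit to deliver
```

Because the backlog is ordered by priority, everything above the commitment cut-line is also the work least likely to be displaced as the backlog churns – which is exactly why committing from the top down is safe.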
Maybe you are comfortable committing to 240 points, maybe only 80. This is like playing spades. The more you can commit to, without missing, the better off you are. Your tolerance for risk is different than mine.
You can also negotiate with your boss. Commit to 160 points now, and provide an update after every other sprint. More likely than not, you will be increasing the scope of your commitment with every update.
Mid-project updates of “we can do more” are always better than “we can do less.” And both are better than end-of-project surprises. This also allows you to have updates that look like this:
We didn’t know this at the start of the release, but X is really important to our customers – and we will be able to deliver X in addition to what we already committed. Without slipping the release date.
Making commitments with an agile process is not impossible. It just needs to be approached differently (if you want to stay true to agile). The end result: better predictions, more realistic commitments, and the likelihood that each update will be good news instead of bad.
Reference: Agile Estimation, Prediction, and Commitment from our JCG partner Scott Sehlhorst at the Business Analysis | Product Management | Software Requirements blog.