
Software Estimation is not a Black Art

Construx Software is one of the few companies that take the problems of estimation in software development seriously. Earlier this year I attended Steve McConnell’s Software Estimation in Depth course based on his book Software Estimation: Demystifying the Black Art. We spent two days reviewing the factors in estimation and the causes of estimation errors, the differences between estimates, targets, and commitments, how to present estimates to the customer and to management, the importance of feedback and data, which estimating techniques work and why, and when and how to apply these techniques to different estimating problems.

You can get a lot out of reading Steve’s book, but the course setting gives you a chance to work with other people on real problems in estimation and planning, dig deep into questions with the man himself, and see new material and research not included in the book. The key takeaways from this course for me were:

The Basics

Even simple techniques work much better than “expert judgment” (aka wild-ass guesses). With training, straightforward tools, simple mathematical models, and a bit of structure, you can get to 70-75% accuracy in estimating, which is good enough for most of us.
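As an example of how simple a useful “mathematical model” can be, here is a sketch of a three-point (PERT-style) estimate, one of the well-known basic techniques. The numbers are made up for illustration:

```python
# Three-point (PERT-style) estimation: a weighted average that pulls
# the estimate toward the most likely case while still accounting for
# risk at the extremes.

def three_point_estimate(best, most_likely, worst):
    """expected = (best + 4 * most_likely + worst) / 6"""
    return (best + 4 * most_likely + worst) / 6

# A task you think takes 3 days, but could take 2 - or blow up to 10.
print(three_point_estimate(2, 3, 10))  # 4.0 days, not the optimistic 3
```

Even this tiny bit of structure forces you to think about the worst case instead of anchoring on the happy path.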

Never estimate on the spot – it is professionally irresponsible, and you’re setting yourself, your team, and your customer up to fail. Always give the person who is doing the estimate time to think. Everyone knows this, but it’s important to be reminded of it, especially under pressure.

People’s systematic tendency is to underestimate. Many organizations underestimate by a factor of at least 2x. This also applies when comparing back to actual costs: a common mistake is to remember how long you estimated something would take, not how long it actually took.

Estimates are always better if you can use real data to calibrate them: to compare estimates against evidence of how your organization has performed in the past. I knew this. But what I didn’t know is that you don’t need a lot of data points: even a few weeks of data can be useful, especially if it contradicts judgment and forces you to question your assumptions.
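Calibration can be as crude as computing a bias multiplier from a handful of (estimated, actual) pairs. A minimal sketch, with hypothetical numbers standing in for your own tracking data:

```python
# Calibrating estimates against history: a few (estimated, actual)
# pairs are enough to reveal a systematic bias multiplier.
# These numbers are hypothetical; use your own tracked data.

past = [(5, 9), (3, 7), (8, 15), (2, 4)]  # (estimated days, actual days)

bias = sum(actual for _, actual in past) / sum(est for est, _ in past)
print(f"historical bias: {bias:.2f}x")  # ~1.94x - close to the common 2x

def calibrated(raw_estimate):
    """Scale a raw estimate by the organization's historical bias."""
    return raw_estimate * bias

print(calibrated(10))  # a raw 10-day estimate becomes ~19.4 days
```

Four data points won’t give you statistical confidence, but a 1.9x bias staring back at you is hard to argue with.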

Use different techniques at different stages as you move along a project’s Cone of Uncertainty, depending on how much information you have available at the time, and how high the stakes are – what the cost of estimation errors could be to the business. If you need higher confidence or higher quality estimates, use multiple techniques and compare the results.

I like T-Shirt sizing to help in planning and making a business case. Developers come up with a rough order of magnitude estimate on cost or effort (small, medium, large, extra-large) while the Customer/Product Owner/Product Manager does the same for the expected business value of a feature or change request. Then you match them up: Extra-Large business value and Small development cost gets a big thumbs-up. Small business value and Extra-Large cost gets a big thumbs down.
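The matching step can be sketched as a simple lookup. The size scale and scoring below are illustrative, not part of any formal method:

```python
# T-shirt sizing triage: match business value against development cost
# to get a quick go/no-go signal. Sizes and scoring are illustrative.

SIZES = {"S": 1, "M": 2, "L": 3, "XL": 4}

def triage(value_size, cost_size):
    """Positive score: value outweighs cost. Negative: cost outweighs value."""
    score = SIZES[value_size] - SIZES[cost_size]
    if score > 0:
        return "thumbs up"
    if score < 0:
        return "thumbs down"
    return "discuss further"

print(triage("XL", "S"))  # thumbs up
print(triage("S", "XL"))  # thumbs down
```

The point isn’t the arithmetic; it’s that value and cost are estimated independently and only then compared.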

Estimating by analogy – comparing the work that you need to estimate to something similar that you’ve done before – is a simple technique that we use a lot. It works well if you are adding “another screen”, writing “another report”, or building “another interface”. It’s a good fit for maintenance, if you’ve been working with the same code for a while and know most of the gotchas.
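At its simplest, estimating by analogy means scaling a known actual by relative size. A sketch, with made-up numbers:

```python
# Estimating by analogy: start from what a similar piece of work
# actually took, scaled by relative size. Numbers are hypothetical.

def estimate_by_analogy(past_actual_days, past_size, new_size):
    """Scale a past actual (not the past estimate!) by relative size."""
    return past_actual_days * (new_size / past_size)

# The last report had 20 data fields and took 4 days of effort.
# The new report has 30 fields.
print(estimate_by_analogy(4, 20, 30))  # 6.0 days as a starting point
```

Note that the baseline is what the past work actually took, which is exactly why remembering estimates instead of actuals (see above) poisons this technique.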

Intact teams (like mine) tend to view problems in a similar way – ideas and experience converge. This is good if you work on similar problems, in maintenance or a long-running project, because you can make decisions quickly and confidently. But this is bad if you are confronted with new problems – you need techniques like Wideband Delphi to challenge people’s thinking patterns and let new ideas in.

Agile Estimating and Story Points and Velocity

We spent more time than I expected exploring issues in estimating Agile development work. Incremental and iterative planning in Agile development poses problems for a lot of organizations. The business/customer needs to know when the system will be ready and how much it will cost so that they can make their own commitments, but the team ideally wants to work this out iteratively, as they go. In practice this means that they have to define the scope and cost as much as they can upfront, and then work out the details in each sprint – more like staged or spiral development methods. Once they have the backlog sized, they can plan out sprints based on forecast velocity or historical velocity – if they can figure that out.
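The upfront release plan is just the sized backlog divided by velocity. A sketch with hypothetical numbers:

```python
import math

# Upfront release planning: size the whole backlog, then divide by
# velocity (forecast or historical). All numbers are hypothetical.

backlog_points = 240     # total sized backlog in story points
forecast_velocity = 30   # points per two-week sprint

sprints = math.ceil(backlog_points / forecast_velocity)
print(f"{sprints} sprints (~{sprints * 2} weeks)")  # 8 sprints (~16 weeks)
```

The arithmetic is trivial; the hard part, as the next paragraphs argue, is knowing what a story point and a velocity actually mean before you have delivered anything.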

I’m still not convinced that Agile story point estimating is better than (or as good as) other techniques, or that programmers sitting around a table playing Planning Poker is really an effective alternative to thoughtful estimating. Story points create problems in planning new project development because they are purposefully abstract – too abstract to be useful in making commitments to the business. You might have a rough idea of how many story points you have to deliver, but what is a story point in the real world? People who use story points can’t seem to agree on what a story point means, or how and when to use them in estimating.

More fundamentally, you can’t know what a story point costs until the team starts to deliver, by measuring the team’s velocity (the actual story points completed in an iteration). This leaves you with a bootstrapping problem: you can’t estimate how long it is going to take to do something until you do it. You can look at data from other projects (if you’ve tracked this data), but you’re not supposed to compare story points across projects because each project and each team is unique, and their ideas of story points may be different. So you have to make an educated guess as to how long it is going to take you to deliver what’s in front of you, and now we’re back to the “expert judgment” problem.

The good news is that it won’t take long to calibrate Story Point estimates against evidence from the team’s actual velocity. Mike Cohn in Succeeding with Agile says that you need at least 5 iterations to establish confidence in a team’s velocity. But Construx has found that you can have a good understanding of a team’s velocity in as little as 2 iterations. That’s not a long time to wait for some kind of answer on how long it should take to get the work that you know that you have in front of you done.
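Once even two iterations are done, the calibration is simple arithmetic. A sketch with illustrative numbers:

```python
# Calibrating velocity from the first couple of iterations, then
# projecting the remaining work. All numbers are illustrative.

completed = [26, 30]  # story points actually finished in iterations 1 and 2

velocity = sum(completed) / len(completed)  # 28.0 points per iteration

remaining_points = 140
iterations_left = remaining_points / velocity
print(f"~{iterations_left:.1f} iterations remaining")  # ~5.0
```

Two data points is a thin sample, but as with the calibration example earlier, even thin evidence beats an uncalibrated guess.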

There is more to estimating

There is a lot more to estimating, and to this estimation course: details on different techniques and rules of thumb, models and software tools, how to measure estimation error, how to turn estimates into schedules, how to handle schedule compression and how to stay out of the Impossible Zone. To get a taste of the course, check out this webinar on the 10 Deadly Sins of Software Estimation.

Software estimation, like other problems in managing software development, doesn’t require a lot of advanced technical knowledge. It comes down to relatively simple ideas and practices that need to be consistently applied; to fundamentals and discipline. That doesn’t mean that it is easy to do it right, but it’s not dark magic either.

Reference: Estimation is not a Black Art from our JCG partner at the Building Real Software blog.


Jim Bird

Jim is an experienced CTO, software development manager and project manager, who has worked on high-performance, high-reliability mission-critical systems for many years, as well as building software development tools. His current interests include scaling Lean and Agile software development methodologies, software security and software assurance.

1 Comment
Dalip S. Mahal
10 years ago

For those interested, Capers Jones has measured that automated sizing tools for software will increase productivity by 16.5% and code quality by 23.7%.

In addition, proper estimates will reduce the likelihood of excessive schedule pressure. Excessive schedule pressure reduces productivity by 16.0% and code quality by 22.5%.

The paper can be found here (http://bit.ly/CJ_BestWorstPractices).
