
Three Alternatives for Making Smaller Stories

When I was in Israel a couple of weeks ago teaching workshops, one of the big problems people had was large stories. Why was this a problem? If your stories are large, you can’t show progress, and more importantly, you can’t change.

For me, the point of agile is the transparency—hey, look at what we’ve done!—and the ability to change. You can change the items in the backlog for the next iteration if you are working in iterations. You can change the project portfolio. You can change the features. But you can’t change anything if you continue to drag on and on and on for a given feature. You’re not transparent if you keep developing the same feature. You become a black hole.

Managers start to ask, “What are you guys doing? When will you be done? How much will this feature cost?” Do you see why you need to estimate more when the feature is large? Of course, the larger the feature, the more you need to estimate and the more difficult it is to estimate well.

Pawel Brodzinski said this quite well last year at the Agile conference, when he presented his favorite estimation scale. On that scale, anything other than a size 1 meant the story was either too big or the team had no clue.

The reason Pawel and I and many other people like very small stories—a size of 1—is that you deliver something every day, or more often. You have transparency. You don’t invest a ton of work without getting feedback on the work.

The people I met a couple of weeks ago felt (and were) stuck. One guy was doing intricate SQL queries. He thought that there was no value until the entire query was done. Nope, that’s where he was incorrect. There is value in interim results. Why? How else would you debug the query? How else would you discover if you had the database set up correctly for product performance?

I suggested that every single atomic transaction was a valuable piece, and that the way to build small stories was to separate this hairy SQL statement at the atomic transaction level. I bet there are other ways, but that was a good start. He got that aha look, so I am sure he will think of other ways.
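To make that concrete, here is a minimal sketch in Python, using the standard sqlite3 module. The orders schema and the queries are my invented stand-ins for his hairy SQL, not his actual statement; the point is that each atomic piece gets delivered and verified on its own.

```python
import sqlite3

# A hypothetical schema standing in for the participant's real one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
    INSERT INTO orders VALUES (1, 10, 25.0), (2, 10, 75.0), (3, 11, 40.0);
""")

# Step 1: one atomic piece of the eventual query: per-customer totals.
# Deliver and verify this as its own small story before adding more.
per_customer = conn.execute(
    "SELECT customer_id, SUM(total) AS spend"
    " FROM orders GROUP BY customer_id ORDER BY customer_id"
).fetchall()
assert per_customer == [(10, 100.0), (11, 40.0)]  # the interim result has value

# Step 2: the next atomic piece builds on the verified one, filtering
# for big spenders. Each step is small, testable, and shows progress.
big_spenders = conn.execute(
    "SELECT customer_id FROM"
    " (SELECT customer_id, SUM(total) AS spend"
    "  FROM orders GROUP BY customer_id)"
    " WHERE spend > 50 ORDER BY customer_id"
).fetchall()
assert big_spenders == [(10,)]
```

Each assert doubles as the debugging aid I mentioned: if an interim result is wrong, you find out at that step, not after the whole query is done.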

Another guy was doing algorithm development. Now, I know one issue with algorithm development is that you have to keep testing performance, reliability, or whatever other attribute matters while you do the development. Otherwise, you fall off the deep end: you have an algorithm tuned for one aspect of the system, but not another one. The way I’ve done this in the past is to support algorithm development with a variety of tests.

This is the testing continuum from Manage It! Your Guide to Modern, Pragmatic Project Management. See the unit and component testing parts? If you do algorithm development, you need to test each piece of the algorithm—the inner loop, the next outer loop, repeat for each loop—with some sort of unit test, then a component test, then as a feature. And, you can do system-level testing for the algorithm itself.
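Here is one way loop-by-loop testing can look, sketched in Python with unittest. The smoothing functions are invented examples, not anything from the book; substitute whatever your inner and outer loops actually compute.

```python
import unittest

def smooth_row(row):
    """Inner loop: smooth one row with a 3-point moving average."""
    if len(row) < 3:
        return list(row)
    return [row[0]] + [
        (row[i - 1] + row[i] + row[i + 1]) / 3 for i in range(1, len(row) - 1)
    ] + [row[-1]]

def smooth_image(image):
    """Next outer loop: apply the inner loop to every row."""
    return [smooth_row(row) for row in image]

class TestSmoothing(unittest.TestCase):
    def test_inner_loop(self):
        # Unit test for the innermost loop, in isolation.
        self.assertEqual(smooth_row([0, 3, 0]), [0, 1, 0])

    def test_outer_loop(self):
        # Component test: the inner loop composed with the next loop out.
        self.assertEqual(smooth_image([[0, 3, 0], [3, 3, 3]]),
                         [[0, 1, 0], [3, 3, 3]])

if __name__ == "__main__":
    unittest.main()
```

Each loop you wrap in a test like this is a small story you can finish and show.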

Back when I tested machine vision systems, I was the system tester for an algorithm we wanted to go “faster.” I created the golden master tests and measured the performance. I gave my tests to the developers. Then, as they changed the inner loops, they created their own unit tests. (No, we were not smart enough to do test-driven development. You can be.) I helped create the component-level tests for the next-level-up tests. We could run each new potential algorithm against the golden master and see if the new algorithm was better or not.
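Here is a sketch of that golden master setup in Python. The sorting stand-in and the inline cases are invented for illustration; the real golden data would be inputs and outputs captured from the trusted, slower version of the algorithm.

```python
import time

# Golden master: inputs and expected outputs captured from the trusted,
# slower version of the algorithm.
GOLDEN_CASES = [
    ([3, 1, 2], [1, 2, 3]),
    ([5, 5, 0], [0, 5, 5]),
]

def candidate_algorithm(xs):
    # Stand-in for the "faster" algorithm under development.
    return sorted(xs)

def check_against_golden_master(candidate):
    """Fail if any golden case regresses; return the elapsed time so we
    can compare candidates for speed as well as correctness."""
    start = time.perf_counter()
    for given, expected in GOLDEN_CASES:
        assert candidate(given) == expected, f"regression on input {given}"
    return time.perf_counter() - start

elapsed = check_against_golden_master(candidate_algorithm)
print(f"all {len(GOLDEN_CASES)} golden cases pass in {elapsed:.6f}s")
```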

I realize that you don’t have a product until everything works. This is like saying in math that you don’t have an answer until you have finished the entire calculation. And, you are allowed—in fact, I encourage you—to show your interim work. How else can you know if you are making progress?

Another participant said that he was special. (Each and every one of you is special. Don’t you know that by now??) He was doing firmware development. I asked if he simulated the firmware before he downloaded it to the device. “Of course!” he said. “So, simulate in smaller batches,” I suggested. He got that far-off look. You know that look, the one that says, “Why didn’t I think of that?”

He didn’t think of it because it requires changes to their simulator. He’s not an idiot. Their simulator is built for an entire system, not small batches. The simulator assumes waterfall, not agile. They have some technical debt there.
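If the simulator did accept small batches, the interface might look something like this Python sketch. The FirmwareSimulator class and the register names are entirely hypothetical; they only illustrate the shape of batch-sized simulation stories.

```python
# A hypothetical batch-friendly simulator interface (names invented for
# illustration; his real simulator only ran whole systems).
class FirmwareSimulator:
    def __init__(self):
        self.registers = {}

    def run_batch(self, commands):
        """Execute a small batch of firmware commands and return the
        resulting state, so each batch is a small, verifiable story."""
        for register, value in commands:
            self.registers[register] = value
        return dict(self.registers)

sim = FirmwareSimulator()

# Batch 1: bring up the clock config and verify before going further.
state = sim.run_batch([("CLK_DIV", 4), ("CLK_EN", 1)])
assert state["CLK_EN"] == 1

# Batch 2: only after batch 1 checks out, configure the UART.
state = sim.run_batch([("UART_BAUD", 9600)])
assert state["UART_BAUD"] == 9600
```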

Here are the three ways, in case you weren’t clear:

  1. Use atomic transactions as a way to show value when you have a big honking transaction. Use tests for each atomic transaction to support your work and understand if you have the correct performance on each transaction.
  2. Break apart algorithm development, as in “show your work.” Support your algorithm development with tests, especially if you have loops.
  3. Simulate in small batches when you have hardware or firmware. Use tests to support your work.

You want to deliver value in your projects. Short stories allow you to do this. Long stories stop your momentum. The longer your project, and the more teams (if you work on a program), the more you need to keep your stories short. Try these alternatives.

Do you have other scenarios I haven’t discussed? Ask away in the comments.

Johanna Rothman

Johanna consults, speaks, and writes about managing product development. She helps managers and leaders do reasonable things that work. You can read more of her writings at jrothman.com.