“No cake this week”, I said as the team failed their sprint once again. The usual reasons of “if we just had another 2 hours” or “it’s a simple change, we just needed to test it…” came through in the retro. No management success cake was once again a harsh reality.
The team had been together a while and I was the new kid on the block. It was important for me to be sensitive to the situation and build trust. The worst thing to do is barge in and spout off about ‘how you’ve seen this all before’, and start implementing a ‘better way’.
There wasn’t a physical board with stickies on, so each morning a cranky old TV was wheeled out. After a dodgy cable was jiggled into a laptop to make the colour work, and a switch of screen resolution, the team were ready to huddle around the telly. Looking at the perfect burndown chart (which the backlog tool had kindly automated for us) the team were confident that they were on target with a week to go. They each talked about their issues and kindly offered support. When asked about blockers, nothing got raised, and the first successfully completed sprint for a while looked promising. Cake was surely inevitable!
Another day passed with the chart aligning perfectly against the average guideline; everything (on the surface) was running smoothly. The QA had plenty of stuff to test and had been passing work back to the developers to resolve. The developers had been fixing, and the healthy ping-pong discussions were catching things before they were considered ‘Done’. But with 2 days to go, a different story emerged. The team were now on their second day of a horizontal line on the chart, and it had been a while since a work item had moved into ‘Done’. So we closed the sprint off, and unfortunately, you guessed it… the team missed out on cake again.
Ok, so that’s the background… but what is this all about? Well, it’s mostly around the ways we worked to get these problems solved. But it’s also about the frustration I have around how these tools are designed to show burndown.
The burndown built into the tool, based on the Kanban board, used "Decimal Hours" from Tasks added to stories. I don’t have a problem with a team creating tasks if they really want to (it can be a means to an end at the beginning when you’re building trust and understanding), but the tool was forcing some behaviours I totally disagree with. *gets soapbox*
What do points make? PRIZES… err, no… Hours?!
For the chart to work, not only did a task have to be created (even if we didn’t want to use tasks!), it also required a decimal hour value. I’ve seen this kind of exercise extend the length of planning as the team agonise over trying to make the hours match the original point size. At the end of the day, if a team decides to measure using points, T-shirt sizes or gummy bears, it doesn’t take too many sprints before their estimation isn’t too far off.
I’m really not sure why the board vendors think this is a good thing. For me, making the units an abstract measurement of relative size allows the team flexibility in their estimation, and it also keeps the team away from being held to account on a granular level. They show commitment, in return they enjoy less micromanagement.
Some argued that breaking tasks into decimal hours was a good idea because it avoided the ‘lumpy’ burndown chart you’d get if it used story points. My view is that a lumpy deviation (of any kind) is always a good talking point. Either way, this (story) pointless burndown concealed the first issue: poorly sliced stories.
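To show what that ‘lumpiness’ actually looks like, here’s a minimal sketch with entirely made-up numbers (three hypothetical stories, not the team’s real data). A points burndown only drops when a whole story is Done, so it plateaus and then steps down — and each plateau is exactly the kind of talking point a radiator should surface:

```python
# Hypothetical sprint: three stories worth 3, 5 and 2 points,
# completing on days 3, 7 and 9 of a 10-day sprint.
story_done_day = {3: 3, 5: 7, 2: 9}  # points -> day the story was Done

def points_remaining(day):
    """A story's points stay on the chart until the whole story is Done."""
    return sum(p for p, done in story_done_day.items() if day < done)

for day in range(1, 11):
    print(day, points_remaining(day))
# The line sits flat at 10, then 7, then 2, dropping only on days 3, 7 and 9 --
# every flat stretch invites the question "what's blocking us?" at stand-up.
```

A smoothed task-hours line would quietly glide past those same plateaus, which is precisely the information it throws away.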
Done… you have been!
The team diligently converted a story into tasks and made sure each had its decimal hours; amongst these were the tasks needed to carry out the testing. Everything was in place, the burndown chart adjusted itself and the team started their sprint.
They were excellent at keeping the tool up to date, and tasks were put into ‘Done’ as the developers finished off their initial coding for the task. The testing tasks were not yet completed, and here is where we find a problem: the burndown chart could stay completely on track (or ahead) without a single piece of testing being completed! With only one tester, this behaviour was hidden by the tool (hidden, of course, until near cake time!)
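A tiny sketch (again with hypothetical numbers, not the team’s real data) makes the hiding mechanism concrete: summing remaining task hours burns down nicely as dev tasks close, while the remaining story points never move, because no story is actually Done until its testing is too:

```python
# Two hypothetical stories, each with dev and test tasks estimated in hours.
stories = [
    {"points": 5, "tasks": {"dev": 8.0, "test": 4.0}},
    {"points": 3, "tasks": {"dev": 6.0, "test": 3.0}},
]

def hours_remaining(stories, finished):
    """Task-hours burndown: burns down as soon as any individual task closes."""
    return sum(h for s in stories for t, h in s["tasks"].items()
               if (id(s), t) not in finished)

def points_remaining(stories, finished):
    """Story-points burndown: only burns when every task in a story is Done."""
    return sum(s["points"] for s in stories
               if any((id(s), t) not in finished for t in s["tasks"]))

# Mid-sprint: all the dev tasks are closed, but no testing has happened yet.
finished = {(id(s), "dev") for s in stories}

print(hours_remaining(stories, finished))   # 7.0 of 21.0 hours left -> "on track"
print(points_remaining(stories, finished))  # 8 of 8 points left -> nothing is Done
```

Two-thirds of the hours have burned away, yet not one story has reached ‘Done’ — the hours chart and the reality of the sprint have completely parted company.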
Hours Vs. Points
In the burndown examples above, all is fine and dandy on the left until day 8. On the right, as early as day 3 we could start to talk about where we really are. My view is that the point of a radiator is to let us inspect and adapt as we go, not just retrospectively!
The fix is on…
Like I say, when you’re new to a team it’s a case of taking your time and highlighting things to solve in the retrospective. Here are some of the things they did to improve their flow…
The team were encouraged to talk about the stories more during the sprint and trust each other to do the right thing. As time went on they just knew what needed doing, and noted it in the story if need be (as opposed to adding repetitive tasks to each story). They were relieved they didn’t need to do this anymore, as calculating the hours and putting them in the tool had taken about 30–60 minutes each sprint!
We used our own simpler physical board to talk about work in the stand-up (no more cables, scroll bars, and dodgy colours). This freed up about 5 minutes of the team’s stand-up and was much easier to administer.
It only dealt with stories, and we stopped talk of “Done, but…”. We now had a more open and visible radiator for discussing how the developers could help the QA get work into Done before starting the next story.
Stories bigger than a certain size would not be accepted and the team would work to identify better ways to slice the story.
So over the next few weeks and months we did lots more amazing work, such as WIP and queue management, writing upfront tests, automating integration tests, and giving the QA permission to spend more time coaching the team to test as opposed to simply running tests. The net result… everyone’s waistline increased as we celebrated successful sprint after sprint.