
Friday, 29 September 2017

Bulk Estimation at Speed

So, today I faced a new problem. The forecasting I have been using for a while has been throwing out some dates that 'feel' very pessimistic. I want to trust them, but they look wrong, so I find myself with the dilemma of how to test the forecast in a sensible way.

A previous observation was that asking people how big something is gives a wide range of results, whereas asking them whether something can be done in a set amount of time is easier to answer and just as easy to use and interpret as a number.

So, given a backlog of about 50 things I needed to test, I welded these two ideas together and came up with the following session, which I ran with my own team.

The only prep you need is to prepare two question cards. The first question is based on the ideal time a story should take. At the time of writing, my team want every story to be 5 days or less, so the first question card reads "Will it take 5 days or less?". The second question card is the opposite extreme: for us, it has gone terribly wrong if a story takes 2 or more sprints, so that is what the second card asks.

There should be a gap between these, which is your middle ground. The observant among you will realise these options look like T-shirt sizes - which they are! We just don't ask the sizing question directly - we ask whether a story is smaller than X or larger than Y, leaving a wide gap between the two. If it is neither, it must be in the middle ground.

So here's how we ran the session:
Print out your backlog with title and maybe a narrative (as a... I want... so that)
Stack them in the same order as your backlog, highest priority first
Grab a selection of developers and include some QA
Put the question cards and middle ground card on the table
Take each card in turn and ask the questions
Expect discussion but keep it short:
* Encourage listing assumptions if that would make it easier to answer, document these if you can
* Re-iterate the ranges using days ("So if you started on Monday, it would be finished by Friday")
* Use comparisons ("Remember that piece of work on X that took 2 sprints, is it as difficult as that?")
* Watch out for unknowns which drive up estimates, mark these for later investigation/refinement
* Use timeboxing if you think it makes sense
Stack the card on the relevant answer card
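
The triage above boils down to a two-threshold classification. Here is a minimal sketch in Python; the thresholds match the question cards described above, but the story titles and answers are invented for illustration, not taken from any real backlog:

```python
# Hypothetical sketch of the two-question triage used in the session.
# The thresholds (5 days or less, 2 or more sprints) come from the
# question cards described above.

def triage(fits_in_5_days: bool, takes_2_plus_sprints: bool) -> str:
    """Bucket a story by the answers to the two question cards."""
    if fits_in_5_days:
        return "small"   # green: at or under the ideal story size
    if takes_2_plus_sprints:
        return "large"   # red: gone terribly wrong, needs breaking down
    return "medium"      # orange: the middle ground between the two cards

# Made-up backlog, highest priority first: (title, fits in 5 days,
# takes 2+ sprints) as answered by the team.
backlog = [
    ("Login page", True, False),
    ("Payment integration", False, True),
    ("Audit logging", False, False),
]

buckets = {title: triage(small, large) for title, small, large in backlog}
```

Stacking a card on an answer card is exactly this function applied by hand, one story at a time.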

Scoring a story whilst telling the story
I was quite surprised at how fast my team did this - 52 stories in 1 hour. We can then visualise the backlog in order and colour-code the cards to show the sizes from the team. You would expect the first stories to be all green, progressively turning orange and then red as we head further away from the top of the backlog.

That's not what we found. 

We found a mix across the backlog that we needed to verify. This showed us what the team did not have a good understanding of, allowing us to focus our time on refining the cards that were big only because we were scared of or confused by them.

As far as validating the forecasting goes, we could now model a series of sprints based on how long stories might take. It looks a little like a Gantt chart, but it is throwaway so I will allow it. More importantly, it allowed us to see how we might reduce some of the risk in our delivery by tackling the unknowns upfront and ensuring we only develop the smallest stories we can, which are far more predictable.
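
One throwaway way to build that model: give each bucket a nominal duration and lay the stories out end to end. The durations and backlog below are assumptions for illustration, not measured figures:

```python
# Throwaway timeline model: each size bucket gets an assumed nominal
# duration in working days, and stories are laid out strictly in
# backlog order. The numbers are illustrative, not measurements.
DAYS_PER_SIZE = {"small": 5, "medium": 10, "large": 20}

def timeline(sized_backlog):
    """Return (title, start_day, finish_day) for each story in order."""
    day, rows = 0, []
    for title, size in sized_backlog:
        start = day
        day += DAYS_PER_SIZE[size]
        rows.append((title, start, day))
    return rows

# Made-up backlog: a mix of sizes, highest priority first.
plan = timeline([("Login page", "small"),
                 ("Audit logging", "medium"),
                 ("Payment integration", "large")])
```

Plot those start/finish ranges and you get the throwaway Gantt-alike; the big red stories immediately stand out as the risk to tackle first.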

Just as importantly, the team found this useful as a confirmation of what we would be doing and in what order, allowing them to spot things that had been overlooked. They did this by telling a story of what the system would look like as they developed each card.

They checked the system by also finding places where a demo would make sense and the system would provide end to end functionality that could be seen and understood by stakeholders. We found having a system diagram very useful in this as the developers can point to what they are talking about and everyone understands the context.

They even asked if they could do it again, which is surely the sign of good times :)

Tuesday, 14 March 2017

Bad Story Spotting

Today, I was having a chat with a few others. It was long and rambling but the conclusion was kind of nice. We realised that every single issue we were talking about really started with how we prepare our stories. We often have bad stories that make it through and then we have to deal with the problems in the sprint, which is terrible for everyone concerned.

So we came up with 3 ways of spotting a bad story, rather than trying to decide what a good one would look like. All three checks have to pass for a story to be one we should do. The vision was that the team can 'test' a story before allowing it to be worked on, ensuring the team has the final say.

1) Does it have any assumptions or risks?

No... none? Don't believe it. It would be a truly exceptional story that has neither. If there are no risks, then I would bet the story is either too small or has no value in it. Neither is good. The most common reason is that we simply didn't think of any, which is not the same as there not being any.

If there are some assumptions and risks then we should find out if we are happy with them. Can we mitigate the risks we have identified? Are there any experiments we can run to find out some more? Are the risks worth it and we are happy with the trade off (which will probably be longer delivery time)?

Spotting assumptions in a discussion is actually very easy - "Well, IF we did this...", "What about..." or the clanger "Assuming....". Stop, ask for it to be a sentence and write it down!

Risks require a little more effort. We have some questions we can ask: "What is likely to take the longest?", "What have we not done before?", "What is only known by a few members of the team?". Liz Keogh's scale for estimating complexity is a nice tool too, and should help you uncover the risks in key parts of a story.

2) Do we all see the same thing?

I was introduced to a great game in a workshop with Tobias Mayer. You create a character by each person in your group adding one feature at a time. When you can no longer 'see' the character, the next person has to explain how the character makes sense to them - this is to help you see it.

At the start of your story, or during planning, you can call out what you need to do for a story, one person at a time and in order. If anyone does not see the same thing, they can challenge by saying that they cannot 'see it', allowing you to spot if things are missing. When we cannot think of anything else to add, we check that we all 'see it' and move on.

You can also do this using diverge and merge if you task out stories. This is where everyone creates tasks independently and then we merge them together to see where we agree or have different ideas. Both outcomes are valuable - similarities suggest something we should definitely do, whilst singular tasks merit conversation since we might have missed something.
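
The merge step is just set arithmetic over everyone's task lists. A small sketch, with invented task names:

```python
# Diverge and merge: everyone tasks out the story separately, then we
# compare. Tasks everyone listed are probably right; tasks only some
# people listed need a conversation.
from functools import reduce

def merge_tasks(task_lists):
    """Split tasks into those everyone listed and those only some listed."""
    sets = [set(tasks) for tasks in task_lists]
    agreed = reduce(set.intersection, sets)              # everyone wrote these
    needs_discussion = reduce(set.union, sets) - agreed  # only some people did
    return agreed, needs_discussion

# Two developers task out the same story independently (invented tasks).
agreed, discuss = merge_tasks([
    ["write endpoint", "add tests", "update docs"],
    ["write endpoint", "add tests", "migrate schema"],
])
```

Anything in the second set is either a missed task or a misunderstanding - both worth surfacing before the sprint starts.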

If we don't all see the same thing, we need to spend a bit more time making sure we do. Doing this with the people who will actually develop the story is absolutely required. Extra points if you are using the Three Amigos for this conversation.

3) Will this take longer than...

Yes, estimates suck. This isn't quite that. You pick a line in the sand and decide that stories should not take longer than it. That means the story delivered, and everything that entails. For my team, at the time of writing, we are trying to get everything into production in 5 days or less. We are miles off, but that is our line in the sand.

To reach the hallowed land, our stories need to be smaller and the line in the sand helps us recognise stories that do not fit the goal.

You need to take constraints into account. We know our path to live is not a super-smooth highway, so we factor that in and ask, "Will I be done with development in 3 days?". If not, the story is too big and we need to break it down.

You can also use the awesome "No bullsh*t" cards from Lunar Logic, which approach this from a different angle.