Monday, 30 March 2015
Just recently, an update to GMail totally threw me when it subtly changed the way it did something that I use many, many times a day. Although this change probably seemed trivial to the team at Google, it really has been a pain in the neck for me.
Changing the user experience for a popular piece of software is a difficult tightrope to walk. Too little change and you stifle progress, whilst too much risks losing or annoying the user base.
So what caught me out? I get a notification of what has changed in my groups as a top-level entry in my inbox. I found this useful since I could simply go to the group, see what had changed and read the items that were of interest at that moment. I would then return to my inbox and the notification would be removed - happy days.
The update has subtly changed this. Now, it seems to want me to read ALL of the items the notification is related to. If I don't then the notification remains at the top of my inbox until I have viewed everything it relates to or I simply dismiss the notification by swiping it to the left.
In reality this is a tiny change - it could even be a mistake - but I find the change in behaviour very annoying since it requires more action from me. I imagined how the requirement might have been recorded:
Given 1 or more groups have been configured
When the user enters their inbox
Then a notification is displayed for each group that has new mail
And the notifications are at the top of the inbox
And the notifications are ordered descending by the amount of new mail in the group
It has long been my contention that the words we use in scenarios are often a bit sloppy. In this case the scenario can be interpreted in two ways, and the difference revolves around how we interpret a 'new' mail item. Is a mail item new until it has been opened, or only until it has been seen via clicking on a notification?
By not really explaining what 'new' means, we may have killed existing behaviour simply by misinterpreting the original intention - the word 'new' is open to interpretation.
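For example, the ambiguity disappears if the scenario states what 'new' means and when a notification should go away. This is purely my guess at the original intent, but it might read something like:
Given 1 or more groups have been configured
And a group contains mail items the user has not yet seen
When the user enters their inbox
Then a notification is displayed for that group at the top of the inbox
And the notification is removed the next time the user enters their inbox after visiting the group, whether or not every item was opened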
A good practice is to think about how this could be interpreted in the future. Would it be read in exactly the same way as it is now? If there is any doubt, then deliberating over the words used is a good use of time. The emphasis should be on the author to think about this rather than assume that the reader will interpret it correctly.
It is worth thinking about what you know and the reader doesn't - the conversation for my scenario may well have included what 'new' meant, or it could have been well established within that specific team. Since it is completely missing from the scenario, we will never know.
As the author we have access to the whole context of the conversation, so the odd word here or there does not appear to matter. If we take a step back and put ourselves in the place of our intended audience we can often see assumptions we have made and save them a lot of time and effort in the future.
Tuesday, 24 March 2015
Retrospective: The 'Perfect' Sprint
This retrospective switches the team's attention away from the sprint they just had and allows them to explore what their 'perfect' sprint would look like. I have used this with teams that had a bad sprint - topics from that sprint will surface regardless.
We represent the sprint as 4 columns - Prep, Planning, Dev Day, Review. You can choose the columns you want to focus on. The team then write up stickies for their perfect sprint, focusing on a column at a time. You will find that opposites occur at this point e.g. the opposite of what actually happened in this sprint. You can pose questions to get to the root cause and add more stickies if required.
I usually pair people for this, which seems to work better than everyone participating individually. I allow a time box of about 5 minutes and then we quickly go through the points raised. You can group common themes at this point. We repeat this for each column in sequence, following a journey through the sprint.
The key is to keep the team focused on the perfect sprint. You can dive into specific issues as required - these will likely be based on the current sprint so they make good topics for discussion. I found this fits into a 60-90 minute retro with 4 columns at a good pace. If you have less time you can use fewer columns.
At the end we start to highlight what is in our control. This is usually pretty high - 50-75% seems to be about right. This leads the team to the obvious question - why don't we have our perfect sprint more often, given so much of it is in our control?
This outcome drives which actions we want to take into the next sprint - taking one from each column is a nice idea, but it does depend on what you came up with.
Monday, 23 March 2015
Retrospective: Ghosts of Retros Past
This is a retro designed to confront teams with actions from retros that have not been put into practice. Without actions there is little point in the retro, right? This is not a format you can run regularly - ideally you should not have to!
Start off by hunting down all the outcomes from the previous 4-5 sprints. Print them off in large print, cut them into individual cards, fold them up and place them in a bowl. On a board or wall, draw out 3 columns - Ditch, More and Done. The team now take it in turns to pick an action from the bowl and place it in the column they think the action is relevant to.
Encourage discussion at this point - I played devil's advocate often, e.g. "I thought that was done?", "Didn't you do that?", "Yeah, we said that 3 months ago..." - you might not be so mean.
If actions are in More they are still relevant, so there is no reason why they should not be done. Anything in More represents a missed opportunity for change. It is worth pointing out how awesome these ideas are and that the team already came up with them - not acting on them seems like a bit of a waste.
Ditch suggests an action is no longer relevant - you might want to explore whether that is because it is old or because we have evolved as a team. Why didn't we need it? What do we know now that we did not know when we came up with this action? How did we ever survive without it?
You can also look at the Done actions and ask why they got done - were they special or easy... what was different? It might come as a surprise how little has been 'Done'. This can be a valuable learning exercise in itself.
You can now focus the learning on why we did not do these things and then tease out which ones from More we should do immediately. Make sure the team buy into addressing these items in the next sprint. Ask the question "What could we put into practice today?".
Friday, 13 March 2015
Retrospective: Island Voyage
The concept behind this is to encourage the team to replay their sprint and discover key parts of their journey. The linear voyage you take the team on means we focus on the whole sprint rather than just remembering the final few days, which might not be the most representative part to reflect on.
You start by drawing an island in the top right of the board. This is the destination and represents what we delivered. Mine has palm trees and a tiki bar - it is a lovely place to be :) You can have fun with the name; mine was 'Delivery Island', which is a bit boring.
As the icebreaker, you start the retro by asking which kind of vessel represented this sprint. You draw this in the bottom left hand corner of the board. You can play with the speed of the boat, how large it was, was everyone on board, did we have space for more stories, what kind of motor did it have, was everything ship shape - you get the idea. Naming the boat is a nice touch.
You can then spread out the visual cue cards for the team to see. These help them explain events or things they noticed on their journey to the island in the boat they described. I usually give a description of what they are too, as this gets people's creative side kick-started.
If it was not a clean sprint, you can ask the team where they got to. Was it close to the island? One of my teams decided they got to 'Acceptance Bay' but never made it to the island itself. Maybe it went terribly and they ran out of fuel halfway?
Encourage them to use the cards as cues to describe their journey to the island - these can represent things we saw, things we avoided or things we did, e.g.
- did we avoid any big storms or did bad weather push us off course (hurricane)
- did we know where we were going (compass)
- was our time kidnapped or did we have to go out of our way to avoid being kidnapped (pirates)
- is there blood in the water or do we need to be cautious (sharks)
- did we see anything amazing (sunset)
- did anything distract us (dolphins)
- was there somewhere we would have preferred to be (desert island)
- was there the threat of destruction (sea monster)
- what held us back or what stopped us from floating away (anchor)
- what could sink us or what did we manage to avoid (whirlpool)
- did we have anyone overboard (man overboard)
- were we all alone or did we lose our way (dead calm)
The cue cards mean the team don't have to draw anything, but if you have a creative bunch, ditch those immediately! If they cannot see a picture that represents what they want, then they can draw anything they like. You can find pictures easily on Google - I would watch out for mermaids though, they tend to be NSFW (you have been warned).
During the journey you can experiment with detours - you can ask "did we have to go that way?", "Was this the scenic route - was there a more direct way?". You can also explore short cuts - these represent a better way of doing things based on what you know now - "Can you see a faster way of getting from here to here?". Maybe the team got lost? If so, how did they get their bearings again?
The journey path can meander and double back on itself if the team had problems. It can suddenly go straight if there was a breakthrough. You can go around features if you avoided them (The Hurricane of Support) or show the path going through them (The Whirlpool of Broken Builds) if it was a rough time. You can stop off at places if progress was slow, and show other obstacles you crossed paths with that hindered or halted your progress.
The linear nature of the journey lends itself to story telling and the nautical metaphor is pretty easy to grasp. By telling the story of the sprint, you often cover things that we might have forgotten about otherwise.
Monday, 9 February 2015
"3 Strikes" Quality Monitoring
This concept came about through a discussion about how Defect waste should be measured. There are loads of posts on the 7 wastes, which I will not go into now, but that discussion led to something interesting.
We found 3 different gates which we think could be used to actively monitor the quality of our organisation's products and development processes:
1) Defect Waste - This is a measure that is purely within our sprint. If we are working on something new and we find a defect with it in our sprint during QA, we label the time to resolve the issue as defect waste. This may seem harsh but it has interrupted our flow through the development process. We had complete acceptance criteria and access to the domain knowledge that created it - so in an ideal world this should never happen.
This is a symptom of other problems, e.g. are developers buying into the acceptance criteria, are the criteria strong enough, are the developers even reading the criteria? This also encompasses questions around the effectiveness of regression and automation suites, which is another area the team can improve on.
2) Release Testing - A series of teams all feed into a release, which is where we realise an integrated version of our application. We typically run automated suites and exploratory testing here; ideally this is built from tests created by the teams in addition to system level tests. Anything we discover here is irritating since we now need to fix it before we can release.
I find this hard to monitor since the issues raised look like any other issue and it is difficult to pick them out - ideally we are only interested in stuff raised during testing and dates are the only data I can currently rely on. It symbolises complexities around integrating work from multiple teams and possibly improvements for teams in terms of DoD depending on the problems found.
3) Bug Reports from Customers - The last place a bug will be found is by the customer, which means the organisation has failed! It should not make it this far, but that does not mean we cannot learn from it. The number of issues being raised by customers can be used as the final gated measure.
It has evaded both of the other gates so may well be down to customer specific data or configuration. It might inform how we guard against regression for specific customers who are using features that might be specific to them or rarely used.
These 3 measures can be used to monitor trends. Ideally, we would see all of these trending towards zero in every sprint, every release and for every customer. The combination of the 3 gates tells us more about how we treat quality throughout the development process.
By making these measures visible we can also start to re-focus the development teams on a wider view of what we term as quality for the products we build.
Thursday, 29 January 2015
Symbols not words
It is rare that something I try just works, but this was something simple that helped one of my teams.
We were finding it awkward to track the overall state of a story on our manual board. Without this it was easy to miss something interesting during the stand up and even more difficult for the business to look at the board and understand what we were working on.
The team always talked about 'bringing an item into play' so I started to use a CD/DVD play symbol to indicate the items we were currently working on. This quickly evolved to use the stop symbol to show a story had not been started yet, a pause symbol to show the story had something blocking it and eventually a fast forward symbol to indicate that the team were swarming on that single story. Once we used the eject symbol when a story was pulled from the sprint due to a really late change in customer requirements.
It seems almost trivial, but the mere fact we could see the state of all the stories on the board helped with:
1) WIP - before, it was too easy to start working on something without finishing something else. There were several instances where we had a high WIP and some items could not be done by the end of the sprint. Being able to see the status of the board pretty much stopped this overnight. I never quite understood why, but the added visibility meant it was much more difficult to hide opening another story, allowing the team to form and keep an agreement on WIP.
2) Stand ups - we focus the stand up on the 'in play' stories, drilling into each one and organising the day around closing them. We often update the symbols to reflect what we agree to do for the day e.g. pausing a story so we can swarm on another one if we feel it is stuck and we won't incur waste in doing so.
3) Waste - the states of the story give clues to any waste that might be happening during development. If a story is paused - do we have a delay or hand-off? If a story is in play but seems to be stuck in a particular phase of development - maybe we have re-learning happening? If we have to play a paused story - maybe we should be looking for task switching? I can ask questions that help the team see waste and record it more accurately, but the clues start with the status of the story.
We have also used padlocks to indicate that something should not be touched - this happens occasionally if we are working on anything that needs approval by the organisation but we are slightly ahead of the curve and have already planned the item.
The key observation is that everyone 'gets' these symbols. They convey specific information quickly and easily with no training required. They drive team conversations and most importantly, the team has adopted them which is probably the best endorsement you can get.
Friday, 21 November 2014
Connecting developers with the customer's problem
The holy grail may be that developers engage with their customers, but there are plenty of situations where this can be difficult. Sometimes it is also unclear who the customer is, especially when the product is sold to public institutions - is the institution the customer, or the members of the public that it serves?
I found many companies still have barriers between the developers and customers. Although they are often working to break these down, institutional change is usually slow. The more people in the way, the more chance the 'why' is lost and the problem is not addressed.
I wanted to come up with a way to connect the developers with the concerns of the customer irrespective of how removed they were. Collaborating with Dan Rushton, we started to think about conversations that would help make that connection.
All the way through this, I had Melissa Perri's excellent phrase 'Problems not features' ringing through my ears. I wanted to expose the problem we were trying to solve for the customer.
We eventually came up with a concept where we imagined the developer overhearing support calls in a call centre. We saw this as the primary interface for many customers with the companies that provided them with services. We also felt that this would be how a company would get most of its feedback about its service.
We thought that overhearing the customer explain the problem they were experiencing would provide excellent context for the developer to better understand why we were building something to solve that problem.
Our first thought was to simply role play a conversation, but we felt that this would not involve the whole team and could make people feel awkward. We wanted it to revolve around a conversation that was focused but also collaborative.
We then started to think of a storyboard that we could create to convey a conversation. Knowing people can be a bit funny about drawing in groups (unless you work with creative types of course, who usually ask for more art materials), we came up with a neat idea that we called a Situation Card.
The Situation Card is a bounded box and contains a single character in the form of a stick man. A simple conversation needs only 2 types of Situation card - one for the customer and one for the call centre member of staff. More complex conversations can use more types of cards - we created ones for people doing surveys, developers working at their workstation, people on their mobile phone etc.
The character is in one corner and has no face, allowing the team to add speech bubbles, thought bubbles, facial expressions etc. to convey the content of the conversation.
We found A5 was a nice size. Laminating them means you could re-use them. Here is an example of a Situation Card for a person on a mobile phone:
The conversation is told through a series of Situation Cards laid out in a line or in columns, like a comic book or storyboard. You can limit the number of cards so that the conversation is short and concise - you can also refine or refactor pretty easily since cards can be removed and rewritten.
The idea is to tell a story and link the problem the customer has with the feature of your product that solves that problem. In telling the story you can use cues like 'If you could hold the line, I just need to check with my manager' to stimulate conversation in the team and check facts or confirm understanding before continuing the conversation.
The conversation can start off as a complaint and then be refactored into a support call, where there is a happy ending since we have the perfect feature to solve the customer's problem. The process of creating the conversation allows the team to better understand the role of the feature. We also think it could be a good way of exploring whether a feature does what we think it does, although we have not tried that yet.
In the experiments we have had so far, we found this useful for connecting the problems of the customer with an unseen bit of infrastructure or service which has no interface as such but still has a part to play in the service being provided.
Still a work in progress, I will post up some examples as soon as I can.