Thursday, 23 August 2018

Lessons from my favourite angle grinder

I am a DIY nut. I do a lot of jobs around my house and there are some tools which are indispensable.

So when my Makita Angle Grinder stopped working, a little part of me died. The switch had not felt 'right' for a while so I suspected that it had finally given up.

My beaten, battered Makita GA9020
It is the single most used tool at the moment since I am building patios, walls and outside kitchens. All of this involves stone and brick, which always needs to be cut at some point. Usually many points.

This tool was particularly expensive. Makita tools generally are. I could have bought a much cheaper version from another manufacturer. This was a significant capital investment for me at the time. I spent more since I had previously bought cheaper tools which had then failed.

I hate throwing things away. And I still need the tool, so now I have to buy another one! If I buy another cheap one then the same will likely happen, since I will be doing similar if not more complex or heavier work. I can always buy a better (more expensive) tool, but my total investment will be higher than it could have been.

It is a popular belief that anything can be repaired. This is not true. Many things are not designed to be repaired, and the cost of repairing them would be equivalent to replacing the item with a new one. Cheap angle grinders are like this. If you can find the parts, which is hard in itself, they won't be cheap. Something like the switch might be up to a third of the price of the device. LCD TVs are another good example - replacing a panel can easily cost the same as or more than the TV did in the first place.

There is also a noticeable difference in build quality. My Makita feels solid and weighs a tonne. It is intended to last and perform its task even in harsh building environments. Most trades people I know won't turn up with shoddy tools. If one fails, you have lost a day's pay for starters. If they do go wrong, you want them up and running ASAP for minimal expense.

My angle grinder did have a problem with the switch. I found a new one for under £20 on eBay in about 10 minutes. Replacing it was pretty painless - 4 screws, take off the case, swap the power and motor leads, transfer the fuse unit - it took about 15 minutes and most of that was just prising off the case.

I also noted that the brushes can be replaced and the motor taken out for servicing. The Makita is designed to be fixed and repaired - it was part of the design. The most likely failure parts are trivial to replace and require only a screwdriver. It even comes with a guide on how to perform routine maintenance.

As it is designed to be repaired, parts are easy to come by. I could have got my switch for about 30% cheaper if I chose to wait for a generic part from Estonia. We see the same thing happen with car parts - if you are unlucky enough to own a car that was not popular then generic parts might not be made for it, meaning you have to buy the more expensive genuine parts from the manufacturer.

So how does this apply to software?

Good software design should not need a specialist or the original developer to fix or maintain it. Anybody should be able to figure out what to do when they need to work with it.

Investing time to make sure something can be fixed or changed easily will pay off in the future, not now. Identify where that investment is needed most - acknowledge that some components or services won't change that often.

Making it complex does not help the next person. Making it as simple as you can is more difficult than it seems.

Understand your investment. If this were to stop working how much would it cost you? If you need it to keep working, invest in making sure it can be maintained, diagnosed and fixed.

If you are building just enough to get the job done, understand that more investment will be required in the future if you still want that service.

If you do decide to cut corners, it will end up costing you more. Understand how your whole product cycle works - from cradle to grave - so you can make the right decisions.

To help people out we can produce simple documentation that helps the next person. It does not have to contain everything but should include what we have already thought about.

Think about the components or services you are using and try to use ones which are commodities already, meaning they are well supported and understood.

Understand your environment and build appropriate for that. Building something for 100 users will not be the same as something for 100,000 users.

Routinely check your solution and make sure it is fit for purpose. Don't wait until it goes wrong to fix it.

Tuesday, 7 August 2018

Vote with your fists

If there was a single tool I use more than any other with teams it is Fist of Five voting.

If a session with a team does not feel right, at the end I ask the group to provide some feedback. I create a scale from 1 (or 0 if you like) to 5 and give a description for the highest and lowest. I make these up every time, something like "Where 1 is 'please never ask me to do this again' and 5 is 'can we do this every day, it was so much fun'" seems to work well.

You can vary the description to fit what you are looking for. You could choose to describe the scale in terms of effectiveness or return on investment, for example.

Everyone then gives a score using their fingers, allowing us to get some insight into what people are thinking.

I follow up by asking anyone with a score of 3 or less to suggest one thing we can change that would improve their score. As a group, we quickly decide which ones we will try and then we call the meeting to a close.

This is such a simple mechanism but it works for a couple of reasons:

* The feedback we get from the group is at the same time as the problem they observed, making it easier to act on
* Changes are often small so they are easier to implement in the next meeting
* We encourage group ownership of our ceremonies and meetings which helps people engage and take responsibility for their success

An obvious place to try this is in your stand-up. If it feels wrong, this will quickly get you some instant feedback that you can put into practice the next day.

Monday, 16 July 2018

But.... Where do I start?

I have been with a few teams now and I was reflecting on how I deal with each transition. I also paused to think about how the teams I work with feel, given we are both in a new, unfamiliar situation.

So what have I learned?

1) Pause and think about where your team has been (and what they have seen)

Joining a new team, I am always interested in what they are doing. I think we should be more interested in why they are doing it.

We often use the word journey and that's what I'm interested in more than the outcome. If we take time to understand how the team got to where they are, we can often understand more about what drives them, what scares them and how we might be able to help.

One team had a whole load of history which resulted in some seemingly odd behaviours. It all made sense once you understood their journey. This story cannot be told by any single person - I heard several versions from several people, and somewhere in there was what really happened.

Being sensitive to what the team has been through has been a key learning point for me. It has helped me tailor my own behaviours, language and coaching to get better results from the groups and individuals I work with.

2) Assume the best at all times, especially about people

Despite everything I can see and observe, I have to assume that people are doing the best they can, given the situation they are in and what they know. This is liberally taken from the Prime Directive, which is often used as a kick-off to retrospectives.

To me, this applies at all times. It should be our go-to position, even with people and teams we have only just started working with.

In one interview, we were doing an exercise where we show a board and ask the candidate what they can see and what questions they would ask of the team. There was an obvious issue: the same avatar was on 3 cards in the development column. The candidate took great pains to point out what the developer was doing wrong and how it was not helping the other problems they could see on the board.

They never once considered why the person was doing that, or that maybe they were doing it for the right reasons, given their own situation.

What if they were a contractor who was really worried about their renewal and wanted to show how productive they were? What if they had to pick up extra work because someone was on holiday and their stories had not been completed? What if the person really was working on these 3 things by putting in a load of extra hours because they were trying not to let their team down?

3) Make it all visible. Even if it doesn't look good.

At first things often look OK. It's only with transparency that we start to see the problems. Issues are often hidden away and need a bit of coaxing out so we can see the causes.

This takes some guts as people might not like what they see. It is the start of how we adapt ourselves and our processes - without being able to see the problem, you cannot start to fix it.

Transparency not only shows this to the team but also to the world outside the team. This is both a blessing and a curse, since you may have to deal with attention that you would prefer not to have. In my experience, the benefits definitely outweigh the problems.

4) It's not about 'the' process, it's about 'a' process

I like scrum. I also like kanban. Some teams need one, some teams need the other. Some teams need something else. Sometimes we need to start with 'something' so we can start to own it.

If a process is intended to evolve, when does it cease to be what it started out as? What makes Scrum, Scrum or Kanban, Kanban? If we embrace being able to adapt, our process will change as we solve problems and find new ones.

The right process is the one that helps the team build software in the best way for them. Often this is dealing with the situation they are in and the problems they face internally as well as externally. It changes over time as our situation changes.

Key to this is encouraging the team to own the process, to be invested in it. For me, a sign of a mature team is owning actions from retrospectives with the same responsibility as they have for building quality software. They are invested in both equally because combined they allow them to achieve their goal. This is built slowly over time with enthusiasm, retrospectives and responsibility.

Resist the urge to replace what the team have. Work with what you have and remember point 1.

5) Give the gift of consistency

In my experience, most things have already been tried by teams who have been around for a while.

Just because something did not work in the past does not mean it will never work. It might be that the time was not right. It is more likely it was not given a chance.

The difference between trying something and using something is consistency. You need to consistently do something for a while until it becomes habit.

These can look like rules, and my goal is that they are owned by the team, not mandated by me. You know they have become habits when individuals would defend them if they were taken away.

Being consistent about applying something new is the enabler that allows this to happen. I was pretty terrible at this but I have seen the benefits of being rigorous in applying something new, so I had to learn how to do it. You know you are getting somewhere when others uphold the consistency too.

Wednesday, 27 June 2018

Stream or Team?

I have been working in a scaled environment for a while and the addition of new teams is a regular occurrence.

Recently I have been seeing that what we call a team is actually a stream. In this context a stream is a priority of work that needs to be done in parallel with another priority of work.

Here are some tests one team and I came up with, based on our previous experience, to sanity-check a new team.

It's a new team if:

1) The team own their code base and can make technical decisions without upsetting, involving or consulting anyone else

2) There is a backlog of work and the size of the domain ensures the team will have work for the foreseeable future

3) The team can deploy whenever they need to without needing to plan or consult with anyone else

So let's go through some of the learnings that led us to these statements.

The main part of this is around autonomy and responsibility. Picture a team that realises a significant change to the way they branch their code would solve problems they are having. The empowerment we want to give is that they can act on this insight and change whatever they need to change to make them more effective. It's good for them and for business since they waste less time.

Imagine now that they have to validate this change with some others. Worse, they have to persuade them that it will help them too. Decisions by the team need to be backed with the autonomy to make those changes, as well as acceptance of the responsibility for doing so.

If it doesn't work out it only affects the people who decided it and they hold themselves accountable for the decision. This is why autonomy and responsibility are twins - one makes little sense without the other.

A recurring thing I see is the call for feature teams to be spun up to focus on a specific deliverable. This often ignores the longer-term effects of the decision, namely who will support the new feature once it has been delivered into production. In my opinion, this responsibility is best handled by the team who created it, to avoid hand-offs to any support or ops teams that might be present in the business.

Longer-term side effects could also see knowledge about the feature lost as the team is dispersed and the feature is no longer actively developed. Different strategies need to be used in terms of documentation and testing, as we need to ensure we preserve the feature, do not regress it and remain aware it even exists. All these problems get worse with time - the longer we don't work on something, the more it drifts into the realm of fear and 'legacy'.

Ensuring work can easily be deployed into production by a team is fast becoming a standard in fast-moving organisations. Allowing teams to do this whenever they need to is a key enabler in them producing high-quality software with lower risk. Implied in this ability to deploy is the ownership of the environments that make up a team's path to live.

Any sort of sharing or gating of systems that help a team get feedback on the quality of their software is counter-productive. The team need to own these too, allowing them to change their ideas and strategies in line with the problems they need to overcome. Some gates may be necessary, such as change control or regulatory requirements, but they can always be adapted and tuned to help developers as much as possible.

Teams owning their area of the world and knowing there is a vision for them is a powerful thing. It helps us create a sense of purpose and belonging, along with all the disciplines we value in building something and keeping it running. Forming a team around a transient feature is not the same - it feels 'different' and can miss the essential sense of ownership and responsibility that benefits the business.

Making sure the area the team work in is actually big enough is key here. Too small, and any hope of keeping people challenged is going to be slim. Making it too large will also make it harder to ensure a uniform understanding across the team. Knowledge silos form easily in larger teams and the effects are subtle. It can go unnoticed that a specific individual is a bottleneck for others, since they are needed to start or complete specific types of work.

Following on from that thought, the architecture of what you are designing will enable or block teams from being able to form. It might not be possible to simply carve up an existing architecture and assign different parts to different teams. There are often shared components or services which do not sit neatly inside your new boundaries. There is a reason why discrete, contained microservices have become more and more popular recently....

There are other strategies you can employ, but they all have varying pros and cons. For example, component ownership seems like a good idea until you cannot balance keeping the team supplied with work with building the things the business wants - you cannot guarantee that every component has an equal share of new work. Making sure a team has valuable work to do is among the most basic requirements, so having a team structure that does not make this easy does not make sense.

I use these tests whenever there is a requirement to add more people. There is a sweet spot for the number of people in a team but also the number of teams based on your situation. These reflect my own experiences and I'm sure there are stories that conflict. I would love to hear them - how do these tests sit with your own experience?

Friday, 22 June 2018

Retrospective: Health Check Retro

Across the organisation I work with, we do a quarterly health check which is very much stolen from the excellent work Spotify did way back in 2014.

One of the problems our community of practice brought up was follow-up by the teams themselves. The health check gave the organisation a fantastic view of how we feel about the teams we work in, but the teams never used the same information to improve. Odd, right?

I was guilty of this and so I decided to have a retro to focus on improvements the team wanted to make before the next health check.

The setup for this retro was to get the team to vote on the areas in which they wanted to see the most improvement. This was a really quick dot-voting exercise at the end of a stand-up.

In the retro itself, these are the focus areas. We kick off by asking the team to list the problems they see in each of these areas. I like time boxes and gave them a whopping 7 minutes to pull these thoughts out into a flurry of post-its.

I then pick someone in the team to group the post-its so we can see some themes. This is often the person who has used their phone the most or, failing that, a BA (since they usually have a knack for spotting groups).

Next, we focus on just a few problem areas by dot voting. Getting them to list the problems means I can now complete the setup and ask them to find solutions for the problems we came up with, again giving them an aggressive time box to work to.

We now go through everything we have come up with, clarifying anything that is abstract (there are always a few) and asking some questions to get people thinking about what they are trying to solve:

How does this help solve the problem?
How is this related to the problem?
If you did this, what do you hope to fix?
Will this fix the whole problem?
What else might we have to do to fix the problem?

This bit is to clarify what everyone has come up with, which is important since we are going to ask people to own these.

The last part is to ask people to come up and choose 2 solutions: one that they will put into practice this iteration and another which is longer term. If people look like they lack enthusiasm, point out that the first people get to cherry-pick the best things.... that usually helps.

They each read out what they chose for this iteration and talk about what they intend to do. We can keep on top of these in stand-ups, asking what help we need to give to keep up the momentum.

Their homework is to think about how they will bring their other longer term task into action and what help they will need, which we will discuss in the next retro.

Thursday, 21 June 2018

What else can you get from source control?

A while back I presented at a couple of conferences with my good friend Helen Meek on the subject of feedback in organisations and teams.

We created a process you can do in your own organisation to help you score feedback mechanisms in a range of dimensions, allowing you to discover ones which are relevant to your organisation.

The site we created for this is still around, if you would like to have a look. We updated the site with the outputs from each of the sessions, giving an aggregated view of about a hundred people rather than just ours:

Some lucky people even got a set of cards, allowing you to quickly choose ones to look into using a few different games. Our inspiration for the format was 'Top Trumps', a card based game from our misspent youth.

We did this because we wanted to open people's eyes to the huge number of feedback mechanisms we have in our organisations and how few of them we actually use to find, maintain and inspire improvement.

These are some ways of using your source control to help your teams see some new things, depending on what you want to do:

Changing Branching Strategy

Git has made it super easy to branch and merge. The downside is that we can often live in a branch for too long, delaying integration that should be verified by running our automated tests. There is a cost to CI and it is usually only run on key branches - main and/or development, depending on your strategy.

Moving to trunk based development is something my team are currently working on. There is a lot of heavy lifting in our build pipelines that we need to do but there are also more subtle changes in the way we develop, which I think will take a longer time. The question I had was: how do we know we are getting better at this?

In this instance we can mine source control for data that will help bring about this change:

How many branches have we got?
Are we nesting branches?
How long do the branches live for?
Are multiple people committing to branches?
How many commits are we doing per day?

The habits we have are often the hardest to change. Using these bits of data we can have a conversation about what is holding us back, maybe even what is scaring us away from changing.
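As a sketch of what this mining might look like, here is a small Python example that counts commits per day from `git log` output. The dates below are made up for illustration; in a real repository you would pipe in the output of `git log --all --format='%ad' --date=short` (for example via `subprocess`):

```python
from collections import Counter

# Sample output from `git log --all --format='%ad' --date=short`.
# These dates are invented; capture real output via subprocess.
sample_log_dates = """\
2018-06-01
2018-06-01
2018-06-02
2018-06-04
2018-06-04
2018-06-04
"""

def commits_per_day(log_dates: str) -> Counter:
    """Count commits per calendar day from git log date output."""
    return Counter(line for line in log_dates.splitlines() if line.strip())

counts = commits_per_day(sample_log_dates)
print(counts.most_common(1))  # the busiest day
```

The same pattern - dump something out of git, aggregate it, talk about it - works for branch lifetimes and authors per branch too.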

Test Coverage Strategy

Test coverage is a thorny subject. Tooling can give a skewed view of the world, so it should be used only as a guide. We should rely on developers assessing coverage using a range of techniques to build a more rounded picture of test coverage and where it is needed.

My observation is that we rarely use our source control to help us decide on our test coverage strategy. At a basic level, we can draw a picture of how often different areas of the repository are changing. I would expect our need for comprehensive test coverage to be greater in areas of the code base that are changing frequently, helping us get feedback on whether we have broken something.

If an area is not changing - there are always things that 'just work' - we should factor this into our strategy. In terms of return on investment, they don't have nearly as much impact as areas that are changing frequently and should be treated differently.
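One way to draw that picture is to count how often each top-level area of the repository appears in the change history. A minimal Python sketch, assuming you feed it the file paths printed by `git log --name-only --format=` (the paths below are hypothetical):

```python
from collections import Counter

# Sample output from `git log --name-only --format=` - just the changed
# file paths, one per line. These paths are invented; pipe in real
# output via subprocess in practice.
sample_changed_files = """\
billing/invoice.py
billing/tax.py
billing/invoice.py
reporting/export.py
"""

def churn_by_area(changed_files: str) -> Counter:
    """Count how often each top-level directory is touched."""
    areas = Counter()
    for line in changed_files.splitlines():
        if line.strip():
            areas[line.split("/", 1)[0]] += 1
    return areas

print(churn_by_area(sample_changed_files))
```

The hot areas at the top of that counter are where frequent, fast-running tests earn their keep.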

Blind Commits

In an ideal world, every code change should be linked to a story which describes the value and intent behind it.

We don't live there.

Most commits have some sort of link to a story. Take some time to find the ones that don't and find out why. These are invisible changes to your systems. Without a story, what were the acceptance criteria? How were they tested? How were they prioritised?
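A rough way to find them is to scan commit subjects for a story reference. This sketch assumes a hypothetical `PROJ-123` style convention - adjust the pattern to whatever your tracker uses:

```python
import re

# Hypothetical convention: stories are referenced as e.g. PROJ-123.
STORY_REF = re.compile(r"\b[A-Z][A-Z0-9]*-\d+\b")

# Sample output from `git log --format='%h %s'` (made-up commits).
sample_subjects = [
    "a1b2c3d PROJ-101 add invoice export",
    "d4e5f6a fix typo",
    "b7c8d9e PROJ-102 handle empty basket",
]

def blind_commits(subjects):
    """Return commits whose subject has no story reference."""
    return [s for s in subjects if not STORY_REF.search(s)]

print(blind_commits(sample_subjects))  # the invisible changes
```

Even a crude pattern like this gives you a list to start a conversation with.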

Source control is a rich source of information; with a little imagination you will find all sorts of things that identify problems or highlight possible improvements.

Tuesday, 19 June 2018

Accelerating product discovery using experiences

I am a DIY fan. I like building things and over the last year I have had to try a load of new things since I was low on cash but had time to spare.

Recently, I have been doing some tiling and found working out where to start tiling a wall actually quite difficult. There are indeed 'apps for that'. I had a look around, but nothing really did what I wanted, so I thought I would build something for fun.

Instead of piling straight into some code, I stopped and took some of my own advice. I recently wrote a little about using experiences to help discover product features, so this seems like a good opportunity to show how that works.

So after about 7 minutes of 'work' I came up with this comic strip, which explains my experience of tiling a wall in my kitchen:

This small dialogue describes the experience I had. I call this a negative or problem story - it explains the problems I had and gives some context of why.

As product developers we can dig a little into this dialogue and find extra detail in the conversation. In a problem story, this is all about what has happened, so we can expect to find some observations and assumptions. The difference between the two is simple - one is observable, something we know is happening, and the other is something we think might be happening.

For each cell of the comic strip we quickly extract some key words or short phrases (one word ideally, maybe two):

Cell 1: pride, amateur, mistake, research
Cell 2: ignorance, questions, previous work
Cell 3: lesson, impact, planning, difficult
Cell 4: surprise, assumption, simplistic

Next I would create some statements for each cell which would describe our keywords in a little more detail. Some statements might cover several keywords and that's fine. I also categorise what I find as an assumption or observation:

Cell 1
"People like to take pride in their work" - Assumption
"Some jobs take a considerable amount of effort" - Observation
"People look up things if they don't know how to do them" - Assumption
"People want to learn from our previous mistakes" - Assumption

Cell 2
"Some people might not know what a good job looks like" - Assumption
"Given different jobs some people will still not be able to spot the problems" - Assumption

Cell 3
"Starting correctly helps ensure the job goes well" - Assumption
"Specific problems can be avoided through forward planning" - Observation
"Knowing what to look for is not obvious to everyone" - Assumption
"Even when you know what to do, it might not be easy to do in practice" - Observation

Cell 4
"People might think they know what to do when they don't" - Assumption

I am doing this solo, but I would recommend doing it in a group so there is a conversation around these experiences. Working by myself I am subject to my own biases, but you should still get an idea of how this works.

As someone building a product, and probably spending a while doing so, I am particularly interested in the assumptions I have listed. I can zero in on the one that bothers me most and tackle just that, or I could list them out in order and tackle all of them. At this point it's all about visibility - given what we know now, is there anything we should test before we proceed?

As someone thinking about making something to solve a specific problem, a couple of these are troubling:

"People look up things if they don't know how to do them"
If people don't, they will never know there is something that might help them! I would want to be pretty sure that people will research how to tile rather than just doing it. If they won't seek out information, they will never find anything that could help them.

"People might think they know what to do when they don't"
Similar to the above. Often called "unknown unknowns", these are things we are completely ignorant about and would not think of seeking outside knowledge on.

"People want to learn from our previous mistakes"
If people don't want to, then any amount of information that might help them won't make any difference. They won't find out how to do it properly because they simply don't want to!

As a product team just starting out, I would look at these as elements of risk. By proceeding without testing these assumptions, we risk our product not being fit for purpose. If we think the risk is small enough - or we feel confident about our market experience, etc. - we could proceed and convert these assumptions into accepted risks.

The observations might help us aim anything we build at our target audience a little better:

"Some jobs take a considerable amount of effort"
Helping people to not waste time or materials doing the wrong thing could help us sell our product.

"Specific problems can be avoided through forward planning"
Offering something that helps avoid common issues could also help us sell our product. "Canning" expertise and providing this knowledge in an easy to use format is something people find useful.

"Even when you know what to do, it might not be easy to do in practice"
There are some things that are just difficult. If we can solve that problem, we have something that people might want to use.

I have done this exercise with several groups of people and I am always surprised by the insight that is generated by this - we always find something interesting. It's also very fast - doing this including writing this blog has taken about 45 minutes.

You can scale this with larger groups by using a technique called "diverge and merge". Multiple, smaller groups do the same exercise and then you merge the outputs together. Similar comic strips represent similar thinking, which is valuable since everyone is thinking the same thing. Divergent comic strips allow us to explore our scope - we can ask if they are valid and maybe spend some time on them, or we can ignore them, depending on what they represent.

So far we have explored the problem - how about the solution? Comic strip conversations can be used for that too.

This time we put ourselves in the future and describe the experience we want our customers to have once we have built our product.

As I mentioned in my previous blog, asking people to imagine what they would want to hear people talking about is a great way of thinking about the experience we want to create for our customers. In NPS speak - "What would turn our customers into advocates?"

This time, we have an additional thing we can extract from the conversation: features.

Features are things we need to build in order to bring about this experience. We still have observations and assumptions, but the assumptions are slightly different - some could hold now, while others will only be realised once we have built our features.

Since we have already done this in some detail, I will call out only the features, which are all in Cell 4:

"There is an app that someone can use"
"We can calculate the number of tiles you will need for a specific job"
"We can predict the best place to start tiling"
"Different size tiles mean different calculations"
"Different tiling patterns mean different calculations"
"Common planning problems are avoided"

All of these are features that support the experience we are describing. There could well be more but by just looking at the experience we want to create, we can focus on what will directly support or generate it.
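To make the first calculation feature concrete, here is a naive Python sketch of the tile count: a simple grid estimate with a waste allowance. It is purely illustrative - real tiling would need grout gaps, patterns and the 'best place to start' logic on top:

```python
import math

def tiles_needed(wall_w_mm, wall_h_mm, tile_w_mm, tile_h_mm, waste_pct=10):
    """Naive estimate: whole tiles per row and column, plus a waste
    allowance for cuts and breakages (10% is a common rule of thumb).
    Ignores grout gaps and patterns - purely illustrative."""
    cols = math.ceil(wall_w_mm / tile_w_mm)
    rows = math.ceil(wall_h_mm / tile_h_mm)
    return math.ceil(cols * rows * (100 + waste_pct) / 100)

# A 3m x 2.4m wall with 300mm x 200mm tiles.
print(tiles_needed(3000, 2400, 300, 200))  # 132
```

Even a toy like this surfaces the other features: different tile sizes and patterns really do mean different calculations.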

Done in a larger group you will end up with several types of experience. You can then order these however you wish, allowing you to focus on the ones that create the impact you want to have with your customers.

As a product team, look at what we have to kick our project off with:

  • Assumptions we might want to test or convert into risks if we decide we want to continue
  • Observations that support our product idea and help us find customers that will benefit from it
  • Features that generate experiences that we want to create for our customers

Next steps are up to you but you could take the scope from this and go straight into a story mapping session. The advantage of focusing on the experience means that the scope you have will directly contribute to what you need to build for your customer.

Did I mention it's fast?

I would love to hear your experience of using this. I have documented the method I have been using and provided templates for creating your own comic strips which you can download from here. Let me know how it goes.