Friday, 16 November 2018

Technical Debts and Loans

Yesterday I had one of those fantastic conversations where suddenly ideas crystallise and take a new unexpected form.

Talking with one of my team about technical debt, we were musing on whether that was the right word. There are so many ways technical debt can be created that we wondered whether a single word was helpful.

For example, we may have done something that was absolutely the right thing to do at the time, only to learn a better way of doing it later. We have debt to clear up, but it was not created on purpose.

In another instance, we may decide to do something in a less than ideal way to enable something else. A common example could be to meet a deadline or recoup some lost time.

This second one, my esteemed colleague suggested, sounds more like a loan than a debt. We have traded something for something else we need right now, with the expectation that it will be paid back at some point.

This trade-off is a decision, and decisions are difficult to keep track of. Someone else might not have the context behind it - with the code alone, it just looks like debt.

Let's take that loan metaphor a little further....

If we were going to take out a loan we would definitely have a record of it. It would contain terms and conditions, along with an agreement of how much the loan will cost us by the end of the term.

We would also be aware that this will cost us more than paying outright would have. We have traded getting it now for a higher cost, which we have decided is beneficial to us in the short term. This ensures we are happy with the cost of this service.

We would agree payment terms too, upfront, so we all know when the loan will be repaid in full.

The amount this will cost us depends on several factors which are linked to risk. Where the risk of non-repayment is low the cost of the loan is low and we have more flexibility on the length of the term. 

Where the risk is high, the cost of the loan increases and term typically shortens.

The purpose of the loan is also a factor. An investment which can cover the risk, like a house, will typically lower it, whilst something like a car, which depreciates quickly, will increase it.

Finally the person taking out the loan is considered. In the UK, we have a credit score system which scores your risk as an individual - where your track record on loans and repayments is taken into account. 

If you have a habit of not paying loans back, you can be sure you won't be considered a good risk for future loans.

At the extreme, where loans have not been paid after several requests and warnings, collectors will be employed to forcibly recoup the outstanding amount, along with additional fees to cover the hassle.

So, let's apply this to some software development!

We have the option of delivering something a bit faster if we trade off some technical area. For the moment, let's assume our stakeholder has a good line of credit so we offer a 'technical loan agreement'.

We outline what we are trading off and what the implications will be in the future. We decide on the risk of this and let that inform the term of the repayment. This term is the maximum amount of time the loan can be left unpaid based on the risk it represents to us as a development team.

We all agree this is the right thing to do and we store the loan agreement as a document which is included in the source for the project concerned. It acts as a permanent record of that decision.
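
To make that concrete, here is a minimal sketch of what such a record could look like if you wanted something machine-readable sitting next to the code. Everything here - the structure, field names and example values - is hypothetical; in practice a simple text file with the same details works just as well.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class TechnicalLoan:
    """A record of a deliberate technical trade-off, kept in the project's source."""
    title: str              # what we traded off and why
    taken_out: date         # when the team and stakeholder agreed the loan
    implications: str       # what it costs us while it remains unpaid
    risk: str               # low / medium / high, agreed by the development team
    repay_by: date          # the term - the latest date the loan can remain unpaid
    repayment_stories: int  # rough number of stories needed to pay it back

    def is_overdue(self, today: date) -> bool:
        return today > self.repay_by


# Example values are made up for illustration.
loan = TechnicalLoan(
    title="Hard-code the currency in checkout to hit the release date",
    taken_out=date(2018, 11, 1),
    implications="We cannot launch in new markets until this is repaid",
    risk="medium",
    repay_by=date(2019, 2, 28),
    repayment_stories=3,
)

if loan.is_overdue(date.today()):
    print(f"Overdue loan: {loan.title}")
```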

When it comes to prioritisation of work, the team will expect a slice of the throughput to address the debts, which is how we make the repayments. These are refined along with all other stories, and we can use forecasting to make sure their delivery is in line with the terms of the loan agreement.

When the loan is repaid in full, the document is removed from source, but the history can still be relied on if we need it in the future.

If multiple loans are being repaid, there may come a point where this becomes unaffordable for the stakeholder - the repayments for all the outstanding loans exceed the number of stories we are capable of delivering. The development team can refuse to give a loan until the situation improves, e.g. the stakeholder could pay off the outstanding loans in full by giving all the available stories to the team.
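
The affordability check is only rough arithmetic. A sketch, with made-up numbers:

```python
# All numbers are invented for illustration.
stories_per_iteration = 8               # our measured throughput
iterations_before_terms_expire = 3      # soonest repay-by date across open loans
capacity = stories_per_iteration * iterations_before_terms_expire  # 24 stories

outstanding_repayments = 10             # stories needed to repay every open loan

if outstanding_repayments > capacity:
    print("Unaffordable - no new loans until the existing ones are paid down")
else:
    print(f"Affordable: {outstanding_repayments} of {capacity} stories go on repayments")
```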

Where the terms of a loan are not met, we have some options. 

In cases where not paying loans back becomes a habit, our stakeholder's credit rating would also be impacted. We could only extend loans for low-risk decisions, limiting the options for our stakeholders. We might even stop loans being offered entirely until the situation improves. The stakeholder may have to rebuild their credit rating with us before they get what they want.

If we have to forcibly collect on a loan (after having asked nicely, many times), we would take stories away from our stakeholders until the debt was paid in full, slowing delivery and probably causing some pain in the process. This could also affect our view of the stakeholder's credit rating, since we had to intervene.

This may seem playful but it increases visibility of these decisions and gives you feedback which can help build better behaviours across the product and technical teams. 

I have written on this subject before, check out "Debts and Credits in your backlog".

Thursday, 23 August 2018

Lessons from my favourite angle grinder

I am a DIY nut. I do a lot of jobs around my house and there are some tools which are indispensable.

So when my Makita Angle Grinder stopped working, a little part of me died. The switch had not felt 'right' for a while so I suspected that it had finally given up.

[Photo: my beaten, battered Makita GA9020]

It is the single most used tool at the moment since I am building patios, walls and outside kitchens. All of this involves stone and brick, which always needs to be cut at some point. Usually many points.

This tool was particularly expensive. Makita tools generally are. I could have bought a much cheaper version from another manufacturer. This was a significant capital investment for me at the time. I spent more since I had previously bought cheaper tools which had then failed.

I hate throwing things away. And I still need the tool, so now I have to buy another one! If I buy another cheap one then the same will likely happen, since I will be doing similar if not more complex or heavy work. I can always buy a better (more expensive) tool, but my total investment will be higher than it could have been.

It is a popular belief that anything can be repaired. This is not true. Many things are not designed to be repaired and the cost of repairing them would be equivalent to replacing the item with a new one. Cheap angle grinders are like this. If you can find the parts, which is hard in itself, they won't be cheap. Something like the switch might be up to a third of the price of the device. LCD TVs are another good example - replacing a panel can easily cost the same or more than the TV did in the first place.

There is also a noticeable difference in build quality. My Makita feels solid and weighs a tonne. It is intended to last and perform its task even in harsh building environments. Most trades people I know won't turn up with shoddy tools. If one fails, you have lost a day's pay for starters. If they do go wrong, you want to have them up and running ASAP for minimal expense.

My angle grinder did have a problem with the switch. I found a new one for under £20 on ebay in about 10 minutes. Replacing it was pretty painless - 4 screws, take off case, swap power, swap motor leads, transfer fuse unit - it took about 15 minutes and most of that was just prising off the case.

I also noted that the brushes can be replaced and the motor can be taken out for servicing. The Makita is designed to be fixed and repaired; it was part of the design. The most likely failure points are trivial to replace and require only a screwdriver. It even comes with a guide on how to perform routine maintenance.

As it is designed to be repaired, parts are easy to come by. I could have got my switch for about 30% cheaper if I chose to wait for a generic part from Estonia. We see the same thing happen with car parts - if you are unlucky enough to own a car that was not popular then generic parts might not be made for it, meaning you have to buy the more expensive genuine parts from the manufacturer.

So how does this apply to software?

Good software design should not need a specialist or the original developer to fix or maintain it. Anybody should be able to figure out what to do if we need to play with it.

Investing time to make sure something can be fixed or changed easily will pay off in the future, not now. Identify where that investment is needed most - acknowledge that some components or services won't change that often.

Making it complex does not help the next person. Making it as simple as you can is more difficult than it seems.

Understand your investment. If this were to stop working how much would it cost you? If you need it to keep working, invest in making sure it can be maintained, diagnosed and fixed.

If you are building just enough to get the job done, understand that more investment will be required in the future if you still want that service.

If you do decide to cut corners, it will end up costing you more. Understand how your whole product cycle works - from cradle to grave - so you can make the right decisions.

To help people out we can produce simple documentation that helps the next person. It does not have to contain everything but should include what we have already thought about.

Think about the components or services you are using and try to use ones which are commodities already, meaning they are well supported and understood.

Understand your environment and build appropriate for that. Building something for 100 users will not be the same as something for 100,000 users.

Routinely check your solution and make sure it is fit for purpose. Don't wait until it goes wrong to fix it.

Tuesday, 7 August 2018

Vote with your fists

If there is a single tool I use with teams more than any other, it is Fist of Five voting.

If a session with a team does not feel right, at the end I ask the group to provide some feedback. I create a scale from 1 (or 0 if you like) to 5 and give a description for the highest and lowest. I make these up every time, something like "Where 1 is 'please never ask me to do this again' and 5 is 'can we do this every day, it was so much fun'" seems to work well.

You can vary the description to fit what you are looking for. You could choose to describe the scale in terms of effectiveness or return on investment, for example.

Everyone then gives a score using their fingers, allowing us to get some insight into what people are thinking.

I follow up by asking anyone with a score of 3 or less to suggest one thing we can change that would improve their score. As a group, we quickly decide which ones we will try and then we call the meeting to a close.

This is such a simple mechanism but it works for a couple of reasons:

* The feedback we get from the group is at the same time as the problem they observed, making it easier to act on
* Changes are often small so they are easier to implement in the next meeting
* We encourage group ownership of our ceremonies and meetings which helps people engage and take responsibility for their success

An obvious place to try this is in your standup. If it feels wrong, this is a good way to get some instant feedback that you can put into practice the next day.

Monday, 16 July 2018

But.... Where do I start?

I have been with a few teams now and I was reflecting on how I deal with each transition. I also paused to think how the teams I work with feel too, given we are both in a new unfamiliar situation.

So what have I learned?

1) Pause and think about where your team has been (and what they have seen)

Joining a new team, I am always interested in what they are doing. I think we should be more interested in why they are doing it.

We often use the word journey and that's what I'm interested in more than the outcome. If we take time to understand how the team got to where they are, we can often understand more about what drives them, what scares them and how we might be able to help.

One team had a whole load of history which resulted in some seemingly odd behaviours. It all made sense once you understood their journey. This cannot be told by any single person - I heard several versions from several people, and somewhere in there was what really happened.

Being sensitive to what the team has been through has been a key learning point for me. It has helped me tailor my own behaviours, language and coaching to get better results from the groups and individuals I work with.

2) Assume the best at all times, especially about people

Despite everything I can see and observe, I have to assume that people are doing the best they can, given the situation they are in and what they know. This is liberally taken from the Prime Directive, which is often used as a kick off to retrospectives.

To me, this applies at all times. It should be our go-to place, even with people and teams we have only just started working with.

In one interview, we were doing an exercise where we show a board and ask the candidate what they can see and what questions they would ask of the team. There was an obvious issue where the same avatar was on 3 cards in the development column. The candidate went to great pains to say what the developer was doing wrong and how it was not helping the other problems they could see on the board.

They never once thought about why the person was doing that, or that maybe they were doing things for the right reasons, given their own situation.

What if they were a contractor who was really worried about their renewal and wanted to show how productive they were? What if they had to pick up extra work because someone was on holiday and their stories had not been completed? What if the person really was working on these 3 things by putting in a load of extra hours because they were trying not to let their team down?

3) Make it all visible. Even if it doesn't look good.

At first things often look OK. It's only with transparency that we start to see the problems. Issues are often hidden away and need a bit of coaxing out so we can see the causes.

This takes some guts as people might not like what they see. It is the start of how we adapt ourselves and our processes - without being able to see the problem, you cannot start to fix it.

Transparency not only shows this to the team but also to the world outside the team. This is both a blessing and a curse, since you may have to deal with attention that you would prefer not to have. In my experience, the benefits definitely outweigh the problems.

4) It's not about 'the' process, it's about 'a' process

I like Scrum. I also like Kanban. Some teams need one, some teams need the other. Some teams need something else. Sometimes we need to start with 'something' so we can start to own it.

If a process is intended to evolve, when does it cease to be what it started out as? What makes Scrum, Scrum or Kanban, Kanban? If we embrace being able to adapt, our process will change as we solve problems and find new ones.

The right process is the one that helps the team build software in the best way for them. Often this is dealing with the situation they are in and the problems they face internally as well as externally. It changes over time as our situation changes.

Key to this is encouraging the team to own the process, to be invested in it. For me, a sign of a mature team is owning actions from retrospectives with the same responsibility as they have for building quality software. They are invested in both equally because combined they allow them to achieve their goal. This is built slowly over time with enthusiasm, retrospectives and responsibility.

Resist the urge to replace what the team have. Work with what you have and remember point 1.

5) Give the gift of consistency

In my experience, most things have already been tried by teams who have been around for a while.

Just because something did not work in the past does not mean it will never work. It might be that the time was not right. It is more likely it was not given a chance.

The difference between trying something and using something is consistency. You need to consistently do something for a while until it becomes habit.

These can look like rules, and my goal is that they are owned by the team, not mandated by me. You know they have become habits when individuals would defend them if they were taken away.

Being consistent about applying something new is the enabler that allows this to happen. I was pretty terrible at this but I have seen the benefits of being rigorous in applying something new, so I had to learn how to do it. You know you are getting somewhere when others uphold the consistency too.

Wednesday, 27 June 2018

Stream or Team?

I have been working in a scaled environment for a while and the addition of new teams is a regular occurrence.

Recently I have been seeing that what we call a team is actually a stream. In this context a stream is a priority of work that needs to be done in parallel with another priority of work.

Here are some tests one team and I came up with to sanity check a new team, based on our previous experience.

It's a new team if:

1) The team own their code base and can make technical decisions without upsetting, involving or discussing with anyone else

2) There is a backlog of work and the size of the domain ensures the team will have work for the foreseeable future

3) The team can deploy whenever they need to without needing to plan or consult with anyone else

So let's go through some of the learnings that led us to these statements.

The main part of this is around autonomy and responsibility. Picture a team that realises a significant change to the way they branch their code would solve problems they are having. The empowerment we want to give is that they can act on this insight and change whatever they need to change to make them more effective. It's good for them and for business since they waste less time.

Imagine now that they have to validate this change with some others. Worse they have to persuade them that this will help them too. Decisions by the team need to be backed with the autonomy to make those changes as well as accepting the responsibility for doing so.

If it doesn't work out it only affects the people who decided it and they hold themselves accountable for the decision. This is why autonomy and responsibility are twins - one makes little sense without the other.

A recurring thing I see is the call for feature teams to be spun up to focus on a specific deliverable. This often ignores the longer term effects of the decision, namely who will support this new feature once it has been delivered into production. In my opinion, this is best handled by the team who created it, to avoid hand-offs to the support or ops teams that might be present in the business.

Longer term side effects could also see knowledge about the feature lost as the team is dispersed and the feature is no longer actively developed. Different strategies need to be used in terms of documentation and testing, as we need to ensure we preserve the feature, do not regress it and are aware it even exists. These problems all get worse with time - the longer we don't work on something, the more it drifts into the realm of fear and 'legacy'.

Ensuring work can easily be deployed into production by a team is fast becoming a standard in fast moving organisations. Allowing teams to do this whenever they need to is a key enabler in them producing high quality software with lower risk. Implied in this ability to deploy is ownership of the environments that make up a team's path to live.

Any sort of sharing or gating of systems that help a team get feedback on the quality of their software is counter productive. The team need to own these too, allowing them to change their ideas and strategies in line with the problems they need to overcome. Some gates may be necessary, such as change control or regulatory requirements but they can always be adapted and tuned to help developers as much as possible.

Teams owning their area of the world and knowing there is a vision for them is a powerful thing. It helps us create a sense of purpose and belonging, along with all the disciplines we value in building and keeping this running. Forming a team around a transient feature is not the same, it feels 'different' and can miss the essential sense of ownership and responsibility that benefits the business.

Making sure the area the team works in is actually big enough is key here. Too small, and any hope of keeping people challenged is going to be hard. Making it too large will also make it harder to ensure a uniform understanding across the team. Knowledge silos form easily in larger teams and the effects are subtle. It can go unnoticed that a specific individual is an enabler for others, since they are needed to start or complete specific types of work.

Following on from that thought, the architecture of what you are designing will enable or block teams from being able to form. It might not be possible to simply carve up an existing architecture and assign different parts to different teams. There are often shared components or services which do not sit neatly in your new boundaries. There is a reason why discrete, contained microservices have become more and more popular recently.

There are other strategies you can employ, but they all have varying pros and cons, e.g. component ownership seems like a good idea until you cannot balance keeping the team supplied with work and building things the business wants - you cannot guarantee that every component has an equal share of new work. Making sure a team has valuable work to do is among the most basic requirements for a team, so having a team structure that does not make this easy does not make sense.

I use these tests whenever there is a requirement to add more people. There is a sweet spot for the number of people in a team but also the number of teams based on your situation. These reflect my own experiences and I'm sure there are stories that conflict. I would love to hear them - how do these tests sit with your own experience?

Friday, 22 June 2018

Retrospective: Health Check Retro

Across the organisation I work with, we do a quarterly health check which is very much stolen from the excellent work Spotify did way back in 2014.

One of the problems our community of practice brought up was the lack of follow-up by the teams themselves. We ran the health check, which gave the organisation a fantastic view of how we feel about the teams we work in, but the teams never used the same information to improve. Odd, right?

I was guilty of this and so I decided to have a retro to focus on improvements the team wanted to make before the next health check.

The setup for this retro was to get the team to vote on the areas which they wanted to see the most improvement in. This was a really quick dot voting exercise at the end of a stand up.

In the retro itself, these are the focus areas. We kick off by asking the team to list the problems they see in each of these areas. I like time boxes and gave them a whopping 7 minutes to pull these thoughts out into a flurry of post-its.



I now pick on someone in the team to group the post-its so we can see some themes. This is often the person who has used their phone the most or, failing that, a BA (since they usually have a knack for spotting groups).

Next, we focus on just a few problem areas by dot voting. Getting them to list the problems means I can now complete the setup and ask them to find solutions for the problems we came up with, again giving them an aggressive time box to work to.

We now go through everything we have come up with, clarifying anything that is abstract (there are always a few) and asking some questions to get people thinking about what they are trying to solve:

How does this help solve the problem?
How is this related to the problem?
If you did this, what do you hope to fix?
Will this fix the whole problem?
What else might we have to do to fix the problem?

This bit is to clarify what everyone has come up with, which is important since we are going to ask people to own these.



The last part of this is to ask people to come up and choose 2 solutions: one that they will put into practice this iteration and another which is longer term. If people look like they lack enthusiasm, point out that the first people get to cherry-pick the best things... that usually helps.

They each read out what they chose for this iteration and talk about what they intend to do. We can keep on top of these in stand ups, asking what help we need to give to keep up the momentum.

Their homework is to think about how they will bring their other longer term task into action and what help they will need, which we will discuss in the next retro.

Thursday, 21 June 2018

What else can you get from source control?

A while back I presented at a couple of conferences with my good friend Helen Meek on the subject of feedback in organisations and teams.

We created a process you can do in your own organisation to help you score feedback mechanisms in a range of dimensions, allowing you to discover ones which are relevant to your organisation.

The site we created for this is still around, if you would like to have a look. We updated the site with the outputs from each of the sessions, giving an aggregated view of about a hundred people rather than just ours: http://swimminginfeedback.blogspot.com/

Some lucky people even got a set of cards, allowing you to quickly choose ones to look into using a few different games. Our inspiration for the format was 'Top Trumps', a card based game from our misspent youth.

We did this because we wanted to open people's eyes to the huge number of feedback mechanisms we have in our organisations and how few of them we actually use to find, maintain and inspire improvement.

These are some ways of using your source control to help your teams see some new things, depending on what you want to do:

Changing Branching Strategy

Git has made it super easy to branch and merge. The downside of this is that we can often live in a branch for too long, delaying integration that should be verified by running our automated tests. There is a cost to CI, and it is usually only run on key branches - main and/or development, depending on your strategy.

Moving to trunk based development is something my team are currently working on. There is a lot of heavy lifting in our build pipelines that we need to do but there are also more subtle changes in the way we develop, which I think will take a longer time. The question I had was: how do we know we are getting better at this?

In this instance we can mine source control to show us data that we should use to help bring about this change:

How many branches have we got?
Are we nesting branches?
How long do the branches live for?
Are multiple people committing to branches?
How many commits are we doing per day?

The habits we have are often the hardest to change. Using these bits of data we can have a conversation about what is holding us back, maybe even what is scaring us away from changing.
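
As a starting point, here is a rough sketch of how you might pull some of these numbers out of a local clone using the standard git command line and a little Python. It is a sketch, not a polished tool - adapt the questions to your own strategy.

```python
import subprocess
from collections import Counter


def git(*args: str) -> str:
    """Run a git command in the current repository and return its output."""
    return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout


# How many remote branches do we have right now?
branches = [b.strip() for b in git("branch", "-r").splitlines() if "->" not in b]
print(f"Remote branches: {len(branches)}")

# When was each branch last committed to? Stale dates suggest long-lived branches.
print(git("for-each-ref", "refs/remotes", "--sort=committerdate",
          "--format=%(committerdate:short) %(refname:short)"))

# How many commits are we making per day across all branches (last 30 days)?
dates = git("log", "--all", "--since=30 days ago",
            "--pretty=format:%ad", "--date=short").splitlines()
for day, count in sorted(Counter(dates).items()):
    print(day, count)
```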

Test Coverage Strategy

Test coverage is a thorny subject. Tooling can give a skewed view of the world, so it should be used only as a guide. We should rely on developers assessing coverage using a range of techniques to build a more rounded picture of test coverage and where it is needed.

My observation is that we rarely use our source control to help us decide on our test coverage strategy. At a basic level, we can draw a picture of how often different areas of the repository are changing. I would expect our need for comprehensive test coverage to be greater in areas of the code base that are changing frequently, helping us get feedback on whether we have broken something.

If an area is not changing - there are always things that 'just work' - we should factor this into our strategy. In terms of return on investment, they don't have nearly as much impact as areas that are changing frequently and should be treated differently.
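
A rough sketch of that picture, assuming a local clone: count how many file changes each top-level area of the repository has seen recently, then compare that with where your test effort is going.

```python
import subprocess
from collections import Counter

# List every file touched by every commit in the last six months.
log = subprocess.run(
    ["git", "log", "--since=6 months ago", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True).stdout

# Tally changes by top-level directory to see which areas are churning.
churn = Counter(path.split("/")[0] for path in log.splitlines() if path.strip())

for area, changes in churn.most_common():
    print(f"{area}: {changes} file changes")
```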

Blind Commits

In an ideal world, every code change should be linked to a story which describes the value and intent behind it.

We don't live there.

Most commits have some sort of link to a story. Maybe take some time to find the ones that don't, and find out why. These are invisible changes to your systems. Without a story, what were the acceptance criteria? How were they tested? How were they prioritised?
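
One way to surface them, assuming your stories are referenced in commit messages with something like a 'PROJ-123' style key (the pattern below is an assumption - adjust it to whatever your tracker uses):

```python
import re
import subprocess

# Assumed story reference format, e.g. "PROJ-123" - change it to match your tracker.
STORY_REF = re.compile(r"[A-Z][A-Z0-9]+-\d+")

log = subprocess.run(
    ["git", "log", "--since=3 months ago", "--pretty=format:%h%x09%s"],
    capture_output=True, text=True, check=True).stdout

# Keep only the commits whose subject line has no story reference at all.
blind = [line for line in log.splitlines()
         if not STORY_REF.search(line.split("\t", 1)[1])]

print(f"{len(blind)} commits with no story reference:")
for line in blind:
    print(line)
```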

Source control is a rich source of information. If you have a little imagination, you will find all sorts of things that identify problems or highlight possible improvements.