Metrics are often a contentious topic, and for a variety of reasons. They can be used for continuous improvement, which is great, but they can also be used as a stick to monitor performance and compare teams. The latter often leads to managers and Leaders saying, “We need this team to work harder.”

Note: When I write capital L “Leaders” or “Leadership”, I am referring to managers, Directors, VPs, and executives. Anyone can be a leader, but the capital L implies a position of authority.

I kept all of this in mind years ago when, as the Manager of Agile Practices, I was asked to create a metrics dashboard for a group of two dozen Agile teams. The Leaders asking for metrics told me they wanted a way to understand when teams were struggling so they could resolve impediments and offer support. I wanted to find the right mix of metrics that would give Leaders what they were looking for without creating a toxic Big Brother culture where teams felt monitored.

I wanted the metrics to be lightweight, easy to measure, and able to provide insight into how the team AND Leadership could improve. Leadership is typically not used to being measured alongside development teams, so I thought this could be a helpful way to shine a light on areas where they could better support the teams. This would reinforce the roles that teams and Leadership each play and create dual accountability.

To begin, I wrote down ideas about what led to high performing teams. I started with some basic ones such as cross-functionality, team size (not too big!), co-location, and an available Product Owner.

There were others that, over time, I had come to appreciate much more: whether the team has AND respects its working agreements, holds regular team outings, and has direct access to users. After getting the list started, I took it to my team of Agile Coaches and Scrum Masters, and we iterated on and refined the list.

What we came up with was a list of about two dozen items. They were leading indicators, which meant they could help influence change, not just record what happened. They were also binary. It was either a yes or a no, which helped us quickly and easily tally a score. 

For example:

  • The team has 6 or fewer developers. They either do or they don’t.
  • The whole team participates in the retrospective. Yes or no.
  • Work is visible. Is it true or not true?

The list I came up with is shown below.

To highlight how High Performance Enablers affect both teams and Leadership, look at which items are controlled by each group.

Team Can Control

  • Team is cross-functional
  • Team has and respects Working Agreements
  • Team has and respects Definition of Ready
  • Team has and respects Definition of Done
  • Team uses WIP limits
  • Team has direct access to users
  • Whole team participates in retrospectives
  • Team outing took place last quarter
  • Team has backlog items for all of the known work in the upcoming release
  • All backlog items are estimated (affinity is ok)
  • At least two Sprints worth of backlog items meet DoR
  • Work has been showcased in the last month
  • Work is visible on some board (physical, Jira)
  • Work is tasked out prior to coding

Leadership Can Control

  • Team is cross-functional
  • Dev Team has 6 or fewer people
  • Dev Team is fully dedicated (only on one team)
  • Team is co-located
  • Team has 1 Product Owner who is on 2 or fewer teams
  • Product Owner sits with the team
  • Team has 1 Dev Manager who is on 2 or fewer teams
  • Team has 1 Scrum Master/Agile Coach who is on 2 or fewer teams
  • Scrum Master/Agile Coach sits with the team
  • Team members have not changed in the last month
  • Team has direct access to users
  • Team outing took place last quarter
  • Team had input into setting the targeted completion date or scope

Once the indicators were set, each yes was worth one point and each no was worth zero. The points were then added up and divided by the total possible to get the score. I called the indicators High Performance Enablers (HPE) because I viewed them as inputs into high performing teams. After each Sprint, Scrum Masters and Agile Coaches recorded the numbers for the teams they supported.

Equation: 11 (Yes) divided by 17 (Total) ≈ 65%
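
To make that tally concrete, here is a minimal sketch of how such a score could be computed. It is purely illustrative: the enabler names, the answers, and the hpe_score function are stand-ins I am assuming for the example, not the tooling we actually used (a simple spreadsheet works just as well).

    # Minimal, illustrative HPE score calculation (an assumed example, not the
    # original tooling). Each enabler is answered yes (True) or no (False);
    # the score is the percentage of enablers answered yes.

    def hpe_score(answers: dict) -> int:
        """Return the HPE score as a whole-number percentage."""
        if not answers:
            return 0
        yes_count = sum(1 for answered_yes in answers.values() if answered_yes)
        return round(100 * yes_count / len(answers))

    # Hypothetical Sprint snapshot for one team (list abbreviated).
    sprint_snapshot = {
        "Team has 6 or fewer developers": True,
        "Whole team participates in retrospectives": True,
        "Team has and respects Working Agreements": True,
        "Work is visible on a board": False,
    }

    print(f"HPE score: {hpe_score(sprint_snapshot)}%")  # prints "HPE score: 75%"

With a full checklist filled in, 11 yes answers out of 17 items gives the roughly 65% shown in the equation above.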

I made sure to communicate that we were less interested in the specific number – 65% – than in the trend. Was the HPE score going up or down? If it was going down, we wanted to focus on what the team or Leaders could do to help reverse that trend. If the numbers were going up, that was something to celebrate.

High Performance Enablers led to great discussions about continuous improvement and organizational impediments. The Agile Practices team was able to look at the trends and raise issues that Leaders could address. For example, management began reducing teams of a dozen developers to smaller sizes, which led to better communication and collaboration. Teams started paying more attention to actually using their Working Agreements and Definition of Ready, which led to better alignment and shared understanding.

Overall, the High Performance Enabler metrics were valuable and effective. They provided a forward-looking indicator of how to set teams up for success and made it clear how teams and Leaders could influence that success.