Recently I have been party to discussions around how to measure the impact or ROI of an agile coach. How do you measure the impact of agile coaching? How do you justify the huge cost of agile coaches?
Here are the typical metrics used to measure the success of agile coaching:
- Training conducted and average feedback on the trainings conducted by the coach
- Coaching clinics or webinars being conducted
- Training material being created
- Teams being coached
- Forming and contributing to communities, conferences, blogs, etc.
- Number of transformation / change stories delivered
- Feature teams formed or not
- Test automation and automation percentage delta of the team under coaching
- etc.. etc.. etc.. (Fill in for yourself)
I do not recommend activity-based measurements, as they capture only activities and not their outcomes. I would rather go after outcomes, which would then drive the appropriate activities.
Remember: Metrics drive behaviour.
If you measure activity, you will only see an increase in activity, and may not see an increase in the desired outcomes!
Let’s look at the most popular measurement, lead time, first.
“Lead time” is a term borrowed from Lean manufacturing (the Toyota Production System), where it is defined as the time elapsed between a customer placing an order and receiving the product ordered.
Lead time “delta” refers to the change in the mean lead time of features delivered by the team under coaching over a period of time. Ideally, the mean lead time of features should reduce over time. That’s the ultimate goal of agile, right?
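To make the definition concrete, here is a minimal sketch of the calculation, assuming a hypothetical log of (ordered, delivered) dates per feature; all names and dates are made up for illustration:

```python
from datetime import date
from statistics import mean

# Hypothetical feature log: (date ordered, date delivered) per feature.
baseline = [
    (date(2023, 1, 10), date(2023, 9, 12)),  # roughly 8 months
    (date(2023, 2, 1), date(2023, 10, 20)),
]
after_coaching = [
    (date(2024, 3, 4), date(2024, 3, 29)),   # roughly 4 weeks
    (date(2024, 4, 1), date(2024, 4, 30)),
]

def mean_lead_time_days(features):
    """Mean lead time in days: delivery date minus order date, averaged."""
    return mean((delivered - ordered).days for ordered, delivered in features)

# The "delta": how much the mean lead time dropped after coaching began.
delta = mean_lead_time_days(baseline) - mean_lead_time_days(after_coaching)
print(f"Mean lead time improved by {delta} days")
```

The calculation itself is trivial; as the rest of this post argues, the hard part is getting a comparable baseline at all.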
However, there are certain challenges in measuring lead time.
- First, we usually do not have a baseline.
- Second, team composition changes during a transformation: from component-based teams to application-based teams to vertical, cross-functional, cross-application, cross-component feature teams. So even if you have a baseline, it is a baseline for a team comprising different members then versus now. Hence the numbers are not comparable.
- Ideally, you form good feature teams first, then capture the baseline lead time, and only then measure the delta, for a fair apples-to-apples comparison.
- But that takes time! Sometimes a lot of time! In the worst case, it might never happen.
- There are other issues with lead time itself. For example, most agile coaching is initiated by IT departments. An identified feature is not necessarily funded, approved, and prioritized to be picked up by the IT team. This means a feature may be identified but never picked up for development!
- Suppose the feature was indeed prioritized and picked up by the IT team, and was even delivered to users for UAT in 2 weeks. What if the users then take their own sweet time to complete UAT?
- The time elapsed between a user story going into play and being installed in UAT is often called cycle time. This is why most coaches and teams want to measure and track mean cycle time rather than mean lead time. However, doesn’t that defeat the entire purpose of the agile transformation? Don’t get me wrong, I would still measure mean cycle time — just not to measure agile coaching impact.
- Even in the best-case scenario, where the ideal route of first forming feature teams, then capturing a baseline, and then measuring the mean lead time delta is followed, I would question whether the change can be attributed to the coach alone. Suppose lead times have come down from 8 months to 4 weeks. Was it due to the coach? Was it due to the agile process itself? Or was it due to the willingness and hard work of the team? How much of the success can be attributed to the coach and their coaching? How do we measure that?
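The lead time versus cycle time gap above can be illustrated with a small sketch for a single story, using hypothetical dates (everything here is made up for illustration):

```python
from datetime import date

# Hypothetical timeline of one feature / user story:
identified = date(2024, 1, 2)   # feature identified by the business
started = date(2024, 5, 1)      # finally funded, prioritized, and picked up
in_uat = date(2024, 5, 15)      # installed in UAT two weeks later

# Lead time: the customer's view, from identification to UAT.
lead_time = (in_uat - identified).days
# Cycle time: the team's view, from work starting to UAT.
cycle_time = (in_uat - started).days

print(f"Lead time: {lead_time} days, cycle time: {cycle_time} days")
```

Here the cycle time looks excellent (two weeks), while the customer still waited well over four months — exactly the queue time that switching to cycle time quietly hides.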
Hence I am not in favour of the lead time delta as the preferred metric to measure agile coaching impact.
So then, what are the other ways of measuring agile coaching impact? How do you measure it? What are the pros and cons?