Report summary: Practical Methods for Evaluating Coaching

Coaching as a development tool is increasingly prominent. However, many coaching programmes are either evaluated superficially (if at all) or only at the reaction level of the coachees. There is also very little non-partisan advice on coaching evaluation.

The IES study

The aims of the 2003-2006 research study were to:

  • examine the evidence about whether coaching is an effective tool
  • explore issues in evaluating coaching in a workplace context and identify the factors that help or hinder it
  • provide illustrations of how leading companies are evaluating their coaching programmes
  • develop and test a model of how to evaluate coaching programmes, which can be of practical use to companies in planning their own evaluations.

Findings

The literature and our study suggest that, for coaching to sustain credibility, levels of evaluation need to increase. Suggestions for organisations planning an evaluation are:

  • Adapt the traditional model of training evaluation you use elsewhere, or use the evaluation framework presented as part of this research.
  • Clarify why the evaluation is being conducted. Are you seeking to prove something, improve something, or learn something?
  • Be realistic about constraints. Clarify your budget, resources available and any time constraints, and consider these in relation to your purpose.
  • Define success criteria before choosing measures.
  • Be selective in your evaluation measures. Collect data to show whether success criteria have been achieved. Consider looking for benefits well after the coaching has ended.
  • Consider the perspectives of different audiences for the evaluation and how you will access a range of viewpoints.
  • Make sure in advance that your coaches are willing to use your evaluation tools when operating in your organisation.
  • Minimise resistance to the evaluation by letting participants and managers know before the coaching starts what evaluation measures will be used and how they will be expected to contribute.

Company illustrations

Our six featured companies came from a variety of sectors and covered a diverse range of approaches to evaluation. They included T-Mobile, Corus, a global distribution company, a building society, the NHS in Wales and a UK government department.

The lessons learned from the company illustrations about methods for collecting evaluation data will be of interest to would-be programme evaluators and are summarised here.

Business results

If bottom-line business results are what you want, focussing on one key business indicator can be a simple approach yielding straightforward results.

It is better to plan how to evaluate the coaching before starting the programme, so that ‘hard’ baseline data can be collected. It is not always necessary to complicate things by calculating Return on Investment (ROI). Measuring where it is easiest provides ‘reasonable evidence’ and avoids the expense of measuring benefits in areas where it is difficult to identify an appropriate measure. If you do go down the ROI route, you will need significant financial resources and statistical competence at your disposal.
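If you do want to quantify ROI, the arithmetic itself is simple; the demanding part is isolating and costing the benefits attributable to coaching. As a minimal sketch, using the standard training-evaluation formula (the figures in the example below are purely hypothetical):

\[
\text{ROI (\%)} = \frac{\text{programme benefits} - \text{programme costs}}{\text{programme costs}} \times 100
\]

For example, a coaching programme costing £40,000 that is credited with £60,000 of benefit would show an ROI of (60,000 - 40,000) / 40,000 × 100 = 50 per cent.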

Behavioural change

Multiple viewpoints and multiple data collection methods are essential when it comes to measuring perceptions of behavioural change.

Face-to-face interviews enable behaviour change to be explored in more depth, although telephone interviews can also generate detailed information, allow probing, and are a less expensive alternative.

When designing impact questionnaires that look at behaviour change, make sure your rating scale allows for the possibility that changes may be perceived as negative rather than positive.

Surveys

Attitude surveys are a simple and non-resource-intensive method to collect reactions to coaching. Climate surveys can be useful in identifying changes in soft-skill areas, such as communication, and are especially relevant to organisations implementing coaching as a style of management.

Keep survey questionnaires short to get a better response. Response rates will increase further with reminders and chasing, and can be improved by asking coaches or line managers to distribute and collect the questionnaires.

Be cautious in interpreting survey findings where the surveyed population is small.

Control groups

Comparing the results of coached individuals with those of a ‘control’ group of non-coached individuals can be a very effective approach. Being able to compare before-coaching with after-coaching results can also be seen by some as more credible than examining post-coaching data alone. If you don’t have a ‘control’ group, you will need some form of benchmark for comparative analysis to assess whether the activity in question is relatively effective.

An evaluation framework

IES produced a provisional framework which was tested over a two-and-a-half-year period in an in-depth evaluation of three cohorts of a local government strategic coaching programme for HR executives. The framework was then refined.

Figure 1 summarises the two dimensions we propose as being key for would-be evaluators: three main areas of evidence sought and four main likely sources of evidence.

Figure 1: Key dimensions in a coaching evaluation framework

Source: IES, 2006

Three sets of key questions were identified as relevant to the areas in which to seek evidence. Evaluators need to understand the answers before the coaching begins:

  • What do coachees expect to gain from the coaching? And how will we know at the end if these benefits are realised?
  • What does the organisation expect to gain from the coaching? And how will we know at the end if these benefits are realised?
  • What internal and external processes need to be in place to enable the coaching programme to deliver the changes expected? And how will we know whether they are working, in time to change them if they are not?

The research also identified numerous perspectives that might be relevant as likely sources of evidence. However, there are four main sources that it is most helpful to consider: documents (eg records of objectives, achievements, coaching contracts); coachees; coaches (whether internal or external); and the organisation perspective (eg line managers, sponsors, HR, staff).

The three main areas and four main sources can be presented as a simple framework, as in Figure 2.

Figure 2: A framework for coaching evaluation

                         Likely sources of evidence ⇒
Evidence sought at: ⇓    Coachees    Line managers or sponsors    Coaches    Documents
Individual level
Organisation level
Programme processes

Source: IES, 2006

The report

This report is the product of a study supported by the IES HR Network, through which Member organisations finance, and often participate in, applied research on employment issues.

The report also contains details of how to use the framework, plus a selection of the evaluation tools used by our early adopter organisation, which should offer ideas and inspiration for those developing their own tools.