Intervention design, testing and evaluation

We work with policymakers, employers and intermediaries to co-design, test and evaluate employment, education and workplace interventions to understand what works and why. Applying a wide range of robust approaches, from counterfactual and theory-based impact methodologies to rigorous implementation and formative evaluation, we deliver evidence on effectiveness and build confidence in solutions through learning partnerships.

Our multidisciplinary team is knowledgeable about high-quality evaluation aligned to HM Treasury's Magenta Book. It includes qualitative researchers employing a wide range of theory-based, formative and process designs, and specialist statisticians and economists focusing on impact evaluation and other forms of quantitative analysis.

We use experimental and non-experimental designs to assess the economic and social impacts of employment, training and education interventions, as well as more focused approaches to determine what works, for whom, in what circumstances and why. We lead economic and cost-benefit appraisals to Green Book standards to establish the returns to the Exchequer and wider society.

We also specialise in the analysis of large-scale data using state-of-the-art parametric and non-parametric econometric and benchmarking methods.

Read about our range of evaluation methods below.

Contact: Becci Newton

Feasibility studies

Our clients often require support to identify the best approach to evaluating the impact of interventions. Our team is skilled in selecting the strongest methods available, balancing investment against theoretical and practical feasibility.

Our economists regularly assess interventions for evaluability, conducting power calculations to assess the potential for high-quality counterfactual methods and quasi-experimental designs. Our qualitative team provides expert guidance on when to implement rigorous theory-based methods to maximise value to our clients.
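As an illustration of the kind of power calculation involved, the sketch below uses a standard normal-approximation formula to estimate the sample size per arm needed to detect a given standardised effect in a two-arm trial. The effect size, significance level and power threshold are hypothetical choices for illustration, not figures from any specific study.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(effect_size: float, alpha: float = 0.05,
                        power: float = 0.80) -> int:
    """Approximate n per arm for a two-sample comparison of means
    (normal approximation; effect_size is a standardised difference, Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for target power
    n = 2 * ((z_alpha + z_beta) ** 2) / effect_size ** 2
    return ceil(n)

# A 'small' standardised effect (d = 0.2) needs roughly 393 participants per arm
print(sample_size_per_arm(0.2))
```

Calculations like this show early on whether a counterfactual design is realistic: a small expected effect can demand a far larger sample than the programme can recruit.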

Theory of change testing

Policy and programme designers often have a theory of change – a model that explains how an intervention is expected to achieve impact. We lead evaluations that test this theory, examining key assumptions through qualitative and quantitative methods, often with a focus on behavioural change.

More broadly, we apply robust theory-based methods to test the causal pathways outlined in the theory of change, including contribution analysis and qualitative comparative analysis. These methods provide deeper insight into what drives impact and strengthen the evidence base for decision-making.

We also help organisations design or refine the theory behind new interventions by developing a clear logic model or theory of change that shows how and why they are expected to work.

Systems change evaluation

We also work within complex systems and have expertise in engaging a range of stakeholders as co-designers and co-leaders of evaluation. Our approach includes assessing impacts in different areas of the system, for example through ripple effect mapping in systems evaluation. These approaches emphasise participatory and action research, formative evaluation and learning partnerships.

Impact evaluation

We have a strong research record in the quantitative analysis of the impacts of employment and educational programmes and policies, comparing observed outcomes with estimated counterfactual outcomes: what would have happened in the absence of an intervention.

Our quantitative designs include experimental and non-experimental (econometric) estimators, such as randomised controlled trials, propensity score matching and difference-in-differences models.
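The logic of a difference-in-differences estimator can be shown in a minimal sketch: given pre- and post-intervention outcome means for a treated group and a comparison group, the impact estimate is the change in the treated group minus the change in the comparison group. The figures below are invented purely for illustration.

```python
from statistics import mean

def did_estimate(treated_pre, treated_post, comparison_pre, comparison_post):
    """2x2 difference-in-differences: (treated change) minus (comparison change).
    Nets out fixed group differences and common time trends."""
    treated_change = mean(treated_post) - mean(treated_pre)
    comparison_change = mean(comparison_post) - mean(comparison_pre)
    return treated_change - comparison_change

# Hypothetical employment rates (%) before and after a training programme
treated_pre, treated_post = [62, 58, 60], [71, 69, 70]
comparison_pre, comparison_post = [61, 59, 60], [64, 62, 63]
print(did_estimate(treated_pre, treated_post, comparison_pre, comparison_post))  # 7.0
```

Here the treated group improved by 10 percentage points and the comparison group by 3, so the estimated impact is 7 points; in practice the same logic is implemented in a regression framework with controls and standard errors.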

Impact evaluation also uses qualitative designs relying on in-depth interviews, institutional analysis, focus groups and field visits.

Economic evaluation

We have extensive expertise in leading economic evaluations of employment and training-based initiatives. These link initiatives’ impacts to their costs and benefits to estimate the net monetary value (most often in terms of skills and human capital) to individuals, organisations and society.

Evidence is often expressed as return-on-investment ratios, including social returns, which help key decision-makers, from company managers to policy advisors, understand the economic value of initiatives when planning their budgets.
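As a stylised sketch of how such a ratio is built, the example below discounts hypothetical annual benefit and cost streams at HM Treasury's standard 3.5 per cent social discount rate and reports the resulting benefit-cost ratio. All cashflow figures are illustrative, not drawn from any evaluation.

```python
def present_value(cashflows, rate=0.035):
    """Discount a stream of annual cashflows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def benefit_cost_ratio(benefits, costs, rate=0.035):
    """Ratio of discounted benefits to discounted costs; > 1 implies positive net value."""
    return present_value(benefits, rate) / present_value(costs, rate)

# Hypothetical programme: upfront cost plus running costs, benefits over three years
costs = [100_000, 10_000, 10_000, 10_000]
benefits = [0, 60_000, 60_000, 60_000]
print(round(benefit_cost_ratio(benefits, costs), 2))  # 1.31
```

A ratio above one indicates that discounted benefits exceed discounted costs; sensitivity to the discount rate and to which benefits are monetised is a standard part of the appraisal.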

Process evaluation

We have a long and well-established track record in understanding the process of the implementation of programmes. Such work relies primarily on qualitative research methods such as in-depth interviews with programme participants and experts (including the Delphi method), focus groups, forums and discussion groups, case studies and field visits.

All process work is theory-based, using clear evaluation frameworks established at the outset of projects to obtain systematic implementation evidence, which can be linked to programme and policy design and configuration to learn how outcomes and impacts can be improved.

Related projects

Health-led Trials impact evaluation reports

Evaluation of the national roll-out of the early career framework induction programmes

What works in systems change interventions: A review of national and international evidence

ELATT’s learner support: process study

Evaluation of CYA’s Forging Futures programme

Flexible Phonics