Reading Outcomes Framework Toolkit

Introduction to methods

This section is about evaluation methods. The purpose of evaluation is to investigate how an activity, programme or project is working and whether it is making a difference. There are a variety of ways to do this.

This Toolkit focuses on impact evaluation: measuring the change that has occurred as a result of your activity. While your findings might show you are having the impact you intended, it is also possible that there are unintended impacts or that you are not achieving your outcomes. Finding this out is just as important as finding out that you are achieving your aims; it can help you improve your project or focus on more valuable activity.

When you start planning your evaluation you need a clear evaluation question that sets out what you are investigating. This will guide your decisions about how to carry out the research. Your evaluation question might be specific, for example 'Is my activity having a positive effect on the mental health of my participants?', or more general, for example 'Is my activity achieving the intended outcomes?'

For an impact evaluation, being clear about the intended outcomes of your project is vital. It is a good idea to create a theory of change for your activity to set out how you believe your activity results in your intended outcomes, which you can then test.

Quantitative and qualitative methods

There are a number of different methods that can be used for research. The main types of data produced are quantitative (numerical data that can be counted and analysed statistically) and qualitative (descriptive data such as words, opinions and observations). For fuller definitions see 'Listen and Learn: How charities can use qualitative research' (NPC, 2016).

Both types of data are valuable: quantitative evaluation can tell you about the scale of impact, and qualitative evaluation can help you understand impact in more detail, including how it might have been achieved or why it hasn't been. Often quantitative and qualitative methods are used together to check the consistency of findings, to explore any differences and to develop a deeper understanding. This is sometimes called 'mixed methods' evaluation.

This Toolkit focuses on quantitative evaluation and investigating impact on participants. It includes questions that can be used in surveys completed by people who take part in your activity, either as a 'before and after' measure or in a one-off survey about the activity after it has taken place.

Qualitative tools are more difficult to include in the Toolkit because they tend to be specific to the activity, but we hope to add some in the future. For guidance about qualitative research click here. One way of collecting qualitative data is to use an observation guide to carry out structured observations of participants involved in your activity; for guidance click here. For information about a range of data collection methods click here.


[Image. Credit: Dave Cherry]

Process evaluation

This Toolkit is about impact evaluation, but process evaluation is also very important. Process evaluation focuses on operations, implementation and service delivery to help understand how impact is achieved and "what works" to make impact happen. It often investigates the quality of the intervention and might explore the different ways a programme is delivered. For guidance about process evaluation click here.

Developing your tools

This Toolkit includes sample questions that you can use to create a survey evaluating your own activity to encourage reading for pleasure and empowerment. The Toolkit does not provide 'off the shelf' questionnaires that are ready to use, because projects have different intended outcomes and you will need to develop bespoke research tools that work for you. The Toolkit includes support and guidance to enable you to do this.

Many of the questions in the Toolkit do not refer directly to reading. However, they can be used to measure the impact of an activity to encourage reading if they are asked before and after the activity takes place. Asking questions that don't refer specifically to your activity can avoid leading respondents towards a certain answer. Attributing impacts directly to your programme is difficult, though: you might need to 'triangulate' your data (using a variety of sources or methods) or identify a control group against which to compare your results.
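As a simple illustration of a 'before and after' analysis, the Python sketch below calculates each participant's change in average scale score and summarises the change across the group. The participant IDs and scores are invented for the example; a positive average change suggests improvement, but it does not by itself prove that your activity caused it.

    # A minimal sketch of a 'before and after' comparison, using invented
    # scale scores (1-5) for five hypothetical participants.
    before = {"p1": 2.6, "p2": 3.0, "p3": 3.4, "p4": 2.8, "p5": 3.2}
    after = {"p1": 3.4, "p2": 3.2, "p3": 3.6, "p4": 3.0, "p5": 3.8}

    # Change for each participant who completed both surveys
    changes = {p: after[p] - before[p] for p in before if p in after}

    average_change = sum(changes.values()) / len(changes)
    improved = sum(1 for c in changes.values() if c > 0)

    print(f"Average change: {average_change:+.2f}")                 # +0.40
    print(f"Participants who improved: {improved}/{len(changes)}")  # 5/5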

For some of the outcomes we haven't been able to identify questions that meet our criteria. This might be because there is a cost to access the question, because a long and detailed set of questions is required, or because survey questions are not the best way to measure that outcome. For guidance about how to develop your own questions click here.

We hope to source or create tools to fill the gaps in the future. If you know of relevant questions that could be added, let us know and we will consider them for future versions of the Toolkit. Send full details to The Reading Agency at [email protected]

Data analysis

If you need advice about carrying out data analysis click here. The Toolkit includes 'scale' questions, which ask how much the respondent agrees with a number of statements. In some instances a scale question measures a single concept (e.g. empathy) using a number of statements. To analyse the responses you can assign a number to each answer option (e.g. 1 for strongly disagree up to 5 for strongly agree) and calculate an average across the statements.

Make sure to 'reverse code' negatively worded statements (e.g. 1 for strongly agree and 5 for strongly disagree) so that a higher score always represents a more positive response. In some cases information about scoring can be found at the original source of the question, which we have provided links to. Some questions measure more than one concept; for these you should not calculate an average across the different statements.
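As a worked example, the Python sketch below scores one respondent's answers to a three-statement scale, reverse coding a negatively worded statement before calculating the average. The statement names and answers are invented for illustration; adapt them to your own questionnaire.

    # Answer options mapped to scores: strongly disagree = 1 ... strongly agree = 5
    SCORES = {
        "strongly disagree": 1,
        "disagree": 2,
        "neither agree nor disagree": 3,
        "agree": 4,
        "strongly agree": 5,
    }

    # One respondent's answers; statement_3 is negatively worded (hypothetical)
    responses = {
        "statement_1": "agree",
        "statement_2": "strongly agree",
        "statement_3": "disagree",
    }
    NEGATIVE_STATEMENTS = {"statement_3"}

    def score(statement, answer):
        """Score an answer, reverse coding negatively worded statements."""
        value = SCORES[answer.lower()]
        if statement in NEGATIVE_STATEMENTS:
            value = 6 - value  # 1 becomes 5, 2 becomes 4, and so on
        return value

    scores = [score(s, a) for s, a in responses.items()]
    print(f"Scale score: {sum(scores) / len(scores):.2f}")  # (4 + 5 + 4) / 3 = 4.33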

If you collect any personal data as part of your research you need to be very careful about how you use it and ensure you store it securely. For advice click here.