This resource is being developed to help arts and cultural organisations evaluate the outcomes of their cultural development activity. The information provided here is regularly updated as the outcomes, and the methods used to measure them, are trialled in different communities and at different events.
The starting point: what do you want to know?
Before you evaluate, you need to be clear about what you want to find out about your activity. If you have followed the planning process recommended in CDN’s Planning Framework or Takso, you will have established a maximum of two objectives for your activity, and these are what your evaluation will need to assess. If not, we recommend you select the outcomes from the outcome schema that your activity is intended to achieve. At least one of these should be a cultural outcome, given that your activity is a cultural activity. You might also have an outcome or outcomes from other domains, such as social or economic, that you plan for your activity to address; select these from the outcome schema before you start. We recommend selecting no more than two outcomes, as each one requires work to collect and analyse data.
Evaluators should always seek to minimise the number of questions asked, to reduce the potential for survey fatigue. Both the data collector and the data provider benefit if less data is collected but used more wisely.
Stakeholders to be invited to contribute data
The next task in evaluating outcomes is to understand who you expect to have been impacted by your activity. These are your stakeholders, whose views you will need to gather through an evaluation process. There might be a range of different stakeholders who are impacted in different ways. These might include:
- community members attending the event as audience members (receptive participants)
- community members or professionals who contribute to the running of the activity in some way (enablers)
- those who performed or exhibited (creative participants)
- those who funded or managed the activity
- other people indirectly impacted, such as families of participants, neighbours of the activity and local business owners.
Thinking hard about who these stakeholders are will enable you to decide on a method, timeframe, sample size and questions that you could ask to understand their experience.
Before you start: considering ethical processes
Explaining the purpose of the evaluation
It is important that potential participants have adequate information about the evaluation process, so that they can give informed consent to sharing the information they will be asked for. This should include information about the instigator of the evaluation (who asked for it to be undertaken); the administrator of the evaluation (who is collecting the data); and the purpose of the evaluation (what the findings will be used for, and by whom).
Privacy and confidentiality of information gathered
Participants should also be informed about how data will be gathered, stored and used, and how their privacy will be protected. Will you ensure that participants’ names or photos will not be used? This may be an informal process, such as verbally sharing all of this information with a participant when you approach them, for example to respond to a survey. Participants need to give their approval for you to gather data from them. This might be verbal approval when you are administering a survey in direct communication. For evaluation processes that involve a greater investment of time and a greater potential for risk, a more thorough consent process is advisable, and providing the information in written form might be appropriate in these situations.
Reducing risk in disclosure
Some of the outcomes have the potential to expose participants to vulnerability through disclosure, by triggering the impacts of negative experiences in their lives. All of the outcomes carry this possibility, but some are potentially more exposing than others, putting respondents at higher risk. These include cultural belonging (cultural); physical and mental wellbeing (social); and access to beneficial networks and resources (civic).
Sample information and consent form
This form is suitable for community members engaged in a cultural development project and responding in an extended interview or focus group: Evaluation-consent-form-sample
Evaluation methods
Methods for gathering data to answer these questions will be determined by factors including the type, size, timing and location of the activity; the time, location and resources available for evaluation; and the skills of the evaluators. Options include:
- Questionnaires can provide the quickest and easiest method for gathering data from a large number of people. Questionnaires might be particularly useful when participant numbers are large, such as at a festival or other large cultural activity; when the opportunity to speak to participants is restricted, such as the short time between when they come out of a performance and exit the theatre; when a large sample of responses is preferred; and/or when contact information, particularly email addresses, is available and written responses can be invited. However, questionnaires are limited in that they are only likely to gather quick and simple responses. If the survey is administered immediately after people have had the experience (in a theatre or gallery foyer, or at the festival gates), they may not yet have had time to think through the impact the event has had on them. This means that questionnaires capture little of the complexity of people’s thoughts and ideas, and little about what might have caused or led people to respond as they have (the causal factors or processes leading to outcomes).
- Sample survey form: Wonderland-Shirs-music-festival-audience-survey
- Interviews might be most suitable when the number of participants you need to speak to is not very large; when participants can be accessed in person, by phone, or via Skype or other technology; and when more detailed information is sought than a numerical measure of outcomes.
- Sample interview questions: to come
- Focus groups can be useful when it is possible to bring people together (either in person or using technology), and when interaction between them might be considered useful or important. Focus groups can be more productive than interviews and large surveys, as data can be gathered relatively quickly, the sample size increased by talking with several people at once, and more thoughtful information gathered. Another advantage is that participants can learn from one another as they exchange and build on one another’s views, so the evaluation process can be iterative and experienced by participants as an enjoyable learning process.
- Expert opinion involves the use of experts’ assessments as data. Experts might be people very experienced in running this type of activity (for example, venue managers might have specially developed skills in ‘reading the room’, enabling them to make a judgement about audiences’ responses). In this form of evaluation, the mean score of the assessments provided by experts offers an affordable and valid data collection process. Outcomes could be considered proportionate to the possibility offered by the project: that is, the best possible achievement for an activity of this type would be scored a 10. This decision about what would be the best achievement for a project of that type could include consideration of the resources used (staff and volunteer time, finances, infrastructure, etc.), as well as the particulars of the project: its duration, context and the skills of leaders and participants.
- Participant observation involves a researcher or evaluator observing or participating in an activity to find out more about the experience of others involved. Participant observation always takes place in community settings, in locations believed to have some relevance to the research questions. The method is distinctive because the researcher approaches participants in their own environment rather than having the participants come to the researcher. Generally speaking, the researcher engaged in participant observation tries to learn what life is like for an “insider” while remaining, inevitably, an “outsider.”
- Participatory methods, such as Most Significant Change, which uses stories as its method.
- Arts-based methods, using the arts as a response to an experience.
- Mixed methods (more than one of the above methods).
Rating scale
To create a quantitative rating for these questions, a rating scale can be used. A scale of 0-10 is recommended, where 0 is none and 10 is the most change that a person could imagine for themselves on this measure. This scale is not standardised or assessed against an external norm or benchmark, but allows every respondent to decide for themselves what is the greatest (or least) stimulation of creativity or aesthetic enrichment, etc., that they could imagine for themselves, and to rate their experience in this activity accordingly.
Questions to ask
The outcome schema has sets of questions developed for each outcome.
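Where rating-scale responses are collected digitally (for example, exported from a survey tool or spreadsheet), they can be summarised very simply. The sketch below, in Python, is a minimal illustration only: the question wording and the response values are hypothetical, and the same averaging approach could equally be applied to scores given by expert assessors.

```python
# Minimal sketch (hypothetical data): summarising 0-10 rating-scale responses
# to one outcome question. In practice the ratings would come from your own
# survey export; the question wording here is an invented example.
from statistics import mean, median

question = ("How much did this activity stimulate your creativity? "
            "(0 = not at all, 10 = the most you could imagine for yourself)")

# Ratings collected from respondents (or scores given by expert assessors).
ratings = [7, 8, 5, 9, 6, 10, 7, 8]

print(question)
print(f"Number of responses: {len(ratings)}")
print(f"Average rating: {mean(ratings):.1f} out of 10")
print(f"Median rating: {median(ratings)}")
print(f"Lowest / highest rating: {min(ratings)} / {max(ratings)}")
```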
Getting started asking questions
These questions may present some challenges to participants, both because the concepts being explored are new to them, and also because they require people to reflect on their own experience, which may be something they do not do very often.
Here we offer some suggestions for how the beginning of the survey can be framed, to help participants feel comfortable in taking time, and asking for help, to respond appropriately.
Thanks for taking the time to participate in this survey. This is designed to help (name of organisation here) to understand more about the experiences of participants in this (name or type of activity here).
In this section of the survey, the questions are a bit complex. Because of this, I’ll ask each question once and then I’ll repeat it. If you need to hear the question again, or need to have any part of it explained, that’s absolutely fine, just ask me.
Take as much time as you need in thinking about each question and answering it. After I ask a question, I’ll just wait until you’re ready to respond. Okay? Great, let’s get started.
When to ask the questions: pre-test, progress report (formative evaluation), post-test (summative evaluation)
Questions can be asked in different forms at the beginning of, during, and after the activity. If a question is asked before the initiative, it can be used to gauge the baseline (what was our situation in relation to this measure before the initiative began?). If it is asked during the initiative, we can gauge our progress (how are we going during the initiative?), and if we ask it at the end, we can assess the outcome (what has happened by the time the project is completed?).
For example, if we are interested in understanding how our initiative changes our participants’ appreciation of Australian Indigenous cultural practices, we would need to know what level of appreciation participants had before they commenced the activity, and the level they had after it was over. If the initiative goes for a period of time (such as an artists’ residency, term program of classes, intensive workshop or theatre season), we might want to know how change is progressing during that time. This will enable us to adjust our program before it is too late, maximising our impact.
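As a simple illustration of this pre-test/post-test comparison, the sketch below (again in Python, with invented ratings) compares baseline and end-of-project averages on the 0-10 scale to show the direction and size of change. The figures are hypothetical examples only, not drawn from any real evaluation.

```python
# Minimal sketch (hypothetical data): comparing pre-test (baseline) and
# post-test (outcome) ratings for one outcome, such as appreciation of
# Australian Indigenous cultural practices, on the 0-10 scale.
from statistics import mean

pre_ratings = [3, 4, 5, 2, 4, 3]    # gathered before the activity began
post_ratings = [6, 7, 8, 5, 7, 6]   # gathered after the activity ended

baseline = mean(pre_ratings)
outcome = mean(post_ratings)

print(f"Baseline average:      {baseline:.1f} out of 10")
print(f"Post-activity average: {outcome:.1f} out of 10")
print(f"Average change:        {outcome - baseline:+.1f} points")
```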