Evaluation methods

Methods
Methods for gathering data to answer these questions will be determined by factors including the type, size, timing and location of the activity; the time, location and resources available for the evaluation; and the skill of the evaluators.

  • Surveys can provide the quickest and easiest method for gathering data from a large number of people. Surveys might be particularly useful when participant numbers are large, such as at a festival or other large cultural activity; when the opportunity to speak to participants is restricted, such as the short time between the end of a performance and participants leaving the theatre; when a large sample of responses is preferred; and/or when contact information (particularly email addresses) is available, so that written responses can be invited. However, surveys are limited in that they usually capture only quick and simple responses. If the survey is administered immediately after the experience (in a theatre or gallery foyer, or at the festival gates), people may not yet have had time to think through the impact the event has had on them. This limits how well the complexity of people’s thoughts and ideas can be captured, and how much can be learned about what caused or led people to respond as they did (the causal factors or processes leading to outcomes).
  • Interviews might be most suitable when the number of participants to be consulted is relatively small and they can be reached in person or by phone, Skype or other technology; and when more detailed information is sought than a numerical measure of outcomes.
  • Focus groups can be useful when it is possible to bring people together (either in person or using technology), and when interaction between participants might be considered useful or important. Focus groups can be more productive than interviews and large surveys: data can be gathered relatively quickly, the sample size can be increased by talking with several people at once, and more thoughtful information can be gathered. Another advantage is that participants can learn from one another as they exchange and build on one another’s views, so the evaluation process can be iterative and experienced by participants as an enjoyable learning process.
  • Expert opinion involves the use of experts’ assessments as data. Experts might be people very experienced in running this type of activity (for example, venue managers might have specially developed skills in ‘reading the room’, enabling them to make a judgement about audiences’ responses). In this form of evaluation, the mean of the scores given by the experts offers an affordable and valid data collection process (a simple worked sketch of this averaging follows this list). Outcomes could be considered proportionate to the possibility offered by the project: that is, the best possible achievement for an activity of this type would be scored a 10. This decision about what would be the best achievement for a project of that type could take into account the resources used (staff and volunteer time, finances, infrastructure, etc), as well as the particulars of the project: its duration, its context and the skills of leaders and participants.
  • Participant observation involves a researcher or evaluator observing or participating in an activity to find out more about the experience of others involved. It typically takes place in community settings, in locations believed to have some relevance to the research questions. The method is distinctive because the researcher approaches participants in their own environment rather than having the participants come to the researcher. Generally speaking, the researcher engaged in participant observation tries to learn what life is like for an “insider” while remaining, inevitably, an “outsider”.
  • Participatory methods, such as Most Significant Change, which uses participants’ stories as its data.
  • Arts-based methods: inviting participants to respond to an experience through making art.
  • Mixed methods: combining two or more of the above methods.
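As a rough illustration of the expert-opinion approach above, the following is a minimal sketch in Python of how ratings from a small panel of experts might be averaged into a single score on the 0-10 scale. The function name, the panel, the workshop and the ratings are invented for illustration; this is a sketch of the averaging step only, not a prescribed procedure.

```python
# Minimal sketch: combining expert ratings on a 0-10 scale into a mean score.
# All names and numbers below are hypothetical.

def mean_expert_score(ratings):
    """Return the mean of a list of expert ratings on the 0-10 scale."""
    if not ratings:
        raise ValueError("At least one expert rating is required")
    for r in ratings:
        if not 0 <= r <= 10:
            raise ValueError(f"Rating {r} is outside the 0-10 scale")
    return sum(ratings) / len(ratings)

# Three (hypothetical) venue managers rate a community workshop,
# where 10 = the best possible achievement for an activity of this type.
workshop_ratings = [7, 8, 6]
print(f"Mean expert score: {mean_expert_score(workshop_ratings):.1f} / 10")
```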

To create a quantitative rating for these questions, a rating scale can be used. A scale of 0-10 is recommended, where 0 is no change and 10 is the most change that a person could imagine for themselves on this measure. This scale is not standardised or assessed against an external norm or benchmark; instead, it allows every respondent to decide for themselves what the greatest (or least) stimulation of creativity, aesthetic enrichment, etc., they could imagine for themselves would be, and to rate their experience in this activity accordingly.
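As one way of picturing how such a self-anchored 0-10 question could be administered and validated (for example, in a short digital survey), here is a brief Python sketch. The question wording and the outcome name (‘aesthetic enrichment’) are assumptions made for the example, not recommended wording.

```python
# Minimal sketch: asking and validating a self-anchored 0-10 rating question.
# The prompt wording and outcome name are illustrative only.

def ask_rating(outcome: str) -> int:
    prompt = (
        f"On a scale of 0-10, where 0 is none and 10 is the most {outcome} "
        "you could imagine for yourself, how would you rate this activity? "
    )
    while True:
        answer = input(prompt).strip()
        if answer.isdigit() and 0 <= int(answer) <= 10:
            return int(answer)
        print("Please enter a whole number from 0 to 10.")

if __name__ == "__main__":
    score = ask_rating("aesthetic enrichment")
    print(f"Recorded rating: {score}/10")
```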

When to ask the question: pre-test, progress report (formative evaluation), post-test (summative evaluation)

The question can be asked in different forms at the beginning, during and after the activity. If it is asked before the initiative, it can be used to gauge the baseline (what our situation was in relation to this measure before the initiative began). If it is asked during the initiative, we can gauge our progress (how we are going), and if we ask it at the end, we can assess the outcome (what has happened by the time the project is completed).

For example, if we are interested in understanding how our initiative changes our participants’ appreciation of Australian Indigenous cultural practices, we would need to know what level of appreciation participants had before they commenced the activity, and the level they had after it was over. If the initiative goes for a period of time (such as an artists’ residency, term program of classes, intensive workshop or theatre season), we might want to know how change is progressing during that time. This will enable us to adjust our program before it is too late, maximising our impact.
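The sketch below, using invented numbers, illustrates how pre-, mid- and post-activity ratings on the 0-10 scale might be compared to gauge baseline, progress and outcome for an example such as the one above. The participant scores and stage labels are hypothetical.

```python
# Minimal sketch: comparing pre- (baseline), mid- (formative) and post-
# (summative) ratings on the 0-10 scale. All numbers are invented.

from statistics import mean

# Hypothetical ratings from the same five participants at each stage.
ratings = {
    "pre (baseline)":    [3, 4, 2, 5, 3],
    "mid (formative)":   [5, 5, 4, 6, 5],
    "post (summative)":  [7, 8, 6, 8, 7],
}

baseline = mean(ratings["pre (baseline)"])
for stage, scores in ratings.items():
    avg = mean(scores)
    print(f"{stage:18} mean = {avg:.1f}  change from baseline = {avg - baseline:+.1f}")
```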