Evaluating Evaluation in applied performance work
What to explore/revise/go into more depth on in the second essay edition:
The question of a central archive for applied practices: Arts Alliance, which is involved with the criminal justice system, has an archive holding a huge amount of data that is now in use. Furthermore, Aesop has an archive on mental health and arts services. The Centre for Youth Led Research is making the beginnings of a central archive, though Mita argues there is not enough commitment/follow-through
what is the structure of the archive?
how is evaluation practice influenced by the archive?
why do these sections in the wider field of applied have an archive?
what are the theoretical/cultural practices/values/states that have facilitated this?
what is needed to build an applied archive?
what are the difficulties in this?
Evaluation and gratitude
teasing out the nuances in affect/effect/practice etc.
Confirmation of my theory that the management/worker business-practice divide is functioning within the applied field, between practitioners and funders
Specific case studies of funders and their different relationships to practitioners. Funders are not a homogeneous blob with homogeneous practices. Two case studies from Mita are Lloyd’s Frank Bank and the Research Council
What are the different process implementations from the funder side? How flexible are these?
different fear cultures
different nourishing cultures
what are the Arts Council like?
what are funders’ relationships to one-off versus sustained funding?
myriad other questions yet to be determined
Evaluator as middle man/intermediary/advocate/activist
With the management/worker cultural divide, there isn’t even a shared language with which to discuss projects and their processes/outcomes. As such, the role of the external evaluator is often that of a whistle-blower, a position afforded them because they are not the ones receiving the funding.
Credit the lack of imagination in evaluation practices to the neoliberal business model that has become the structure/culture of evaluation
The practice of evaluation report writing itself; I feel I need to understand better how everything is compiled into one document
Talking to Mita
My ESSAY MAIN POINTS:
Talking to Stella Barnes, 30+ year practitioner/evaluator:
· Ethical considerations
o Her lingering interest is with honest evaluations
§ Participation itself engenders a personal investment in the project. In her practice, Stella seeks to ‘interrogate’ where evaluative responses are coming from. She highlighted that in her work with new migrants, honest/critical evaluation responses were difficult because, culturally, it is not polite for these participants to criticize, and there is an overwhelming sense of gratitude that they wish to convey.
o She also explores the tension between funding and the creative work, from multiple perspectives.
§ The participant may fear that their response will negatively affect future funding, and the facilitator fears the same about reporting any qualitatively negative results. In her 25+ years of experience in the field, she has observed that the shift away from honesty has run in parallel with a shift in funding structures.
§ Where the core of resource support used to be core funding from bodies like the Arts Council, with only a small minority raised on top, the opposite situation now holds true.
§ She does observe, however, that funders do want ‘the honest stuff’ as they’re interested in seeing ‘what you’re learning’
o What does failure look like? I asked. She responded that it depends on the boundaries of the project; for instance, she is working with co-creation at the moment. Co-creation fails when it is artist-led, she explains; a success is where the participant has ownership.
§ Discerning this, though, is again very complex, as a participant may feel ownership and yet express deference to the facilitator out of gratitude for the time, resources and support given in the project.
· Conditions of evaluation
o In terms of monitoring and evaluation, she returned again and again to the point that the questions you ask yourself are just as important as the questions you ask other people.
§ There are aspects of evaluation based on the facilitator’s interests; in the same way that the facilitator has their own research to do with performance praxis, the ‘reflective practitioner’ (Taylor) / evaluator may also have. WHAT IS THE EFFECT OF THIS?
· Barnes finds open questions more interesting, such as ‘What is your experience of today like?’ and ‘What does being creative feel like?’
o Funders often give the evaluation tools they expect to be used upon a project’s completion at the point when funding has been granted so the project may begin; ‘the project will change to meet these criteria’, Barnes remarked.
§ There are necessary aspects of evaluation; these will often include participant skill increase, for instance.
o The most honest and least transactional feedback from participants comes, in her experience, from open conversations in informal spaces.
o IF ACADEMIC:
§ then there is a conflict between the rigidity of questioning in academic methodology, which repeats questions asked in the same way, and asking the same/similar questions in different ways. Barnes’ observation is that the latter can gather more honest feedback from participants than what is collated as part of academic research methodologies; furthermore, she observes that results from academic methodologies are not necessarily more honest.
§ Barnes further observes that formal questions carry the suggestion of a correct answer (an increase in x), which is ethically murky.
· Observations on evaluation:
o As a participant, assessing your own development is very complicated and sophisticated, and difficult to determine on a universal metric scale. Knowing more about yourself might mean you are now less confident but more self-empowered, for instance; evaluating this for yourself is very complex.
Talking to Maria Kacandes, corporate strategist and implementer of projects that run internationally:
(All about evaluating evaluation – when is it effective and when is it not)
· Evaluation requires having set objectives. These can be set by different groups, such as the participant, organisation, facilitator.
o Participant-set objectives are the best: why?
· It is better to have data than not
o Simple input-output studies lack the reflection of multiple perspectives that Taylor suggests is key to applied practice. Yet they should not be wholly discredited, as they do allow an outsider to look into an intimate space, and if this outsider is a funder or fellow practitioner, then that has its uses.
· Complex and extensive reports are good.
o A multiplicity of perspectives, metrics, qualitative and quantitative data, documentation, images and videos can be included, the latter providing space for process to be disclosed. A more extensive evaluation document means that process and result can be balanced in proportions that most accurately convey the process while remaining informative and useful for the participants, facilitators and field at large. This means the needs of more groups have the potential to be met and/or represented by the evaluation. Whose needs are met by current evaluation means is a key question.
o Wider societal data also has a place in large reports, where it can situate the work in its cultural context. This helps analyses be made from the report more accurately if it is being read across cultures/internationally.
§ [This is more about how evaluation is being done – or you could put this in your “effects of evaluation” part to describe the negative effect this has on the work] Stakeholders determining what resources to give on the back of an evaluation are almost always looking for the projects with the greatest efficiency: the fewest resources creating the most output. It is therefore key to provide a comprehensive evaluation that can explicitly demonstrate that a project benefited from being sustained, and hence required the resources it had or needs more. Efficiency in today’s austerity economy seems to mean consolidation, not expansion; this kind of data is a key determinant either way.
· People tend to measure things that are easy to measure [Biases in evaluation practice – why it is difficult to get accurate evaluation – because of the method]
o Things that are within their discipline or easily quantifiable. People don’t tend to measure things that don’t usually get measured; in my project, this would be the effect on the cared-for of the carers who attend my workshops. There is a great difference between measuring things that have never been measured before and measuring things that have established methods. Different methodologies would need to be explored, as would the work needed to gather the data.
§ This extra work is a large limitation in developing evaluation to include these more difficult/new kinds of data.
o Inputs are easier to measure than outputs [Biases in evaluation practice – why it is difficult to get accurate evaluation – because of the evaluator]
§ Furthermore, outputs often provide conflicting/unflattering information in relation to the project’s key objectives. Long-time facilitator and evaluator Stella Barnes reflected that there is a culture of fear: admitting any failure in an evaluation, especially if you are an inexperienced name applying for funding, may result in the efficiency drive ending your project completely.
§ It is interesting to note, of course, that inputs and outputs are not causally linked by any necessity. Such is the nature of introducing scientific metrics into lives, in which the web of cause and effect is tangled, to say the least. One of my research interests going into this writing was the self-congratulatory nature of some community projects, especially those I have worked in with vulnerable people or disabled individuals. Whatever the quality of the end work, for example if lacking in aesthetics or research engagement, both the participants and facilitators blocked any critical engagement with the work.
§ Another interesting phenomenon regarding the causality between inputs and outputs is institutions’ or facilitators’ feeling of ownership/credit over the participants for any results achieved. When evaluating whether objectives were met, for instance if those objectives were the empowerment of participants, it is loaded very differently politically to empower someone than to give someone the tools with which to empower themselves.
Initial questions
My thoughts for this writing are sort of twofold:
investigate how evaluation is being done and redone in community arts projects
(is it evaluated year on year and changed or is the evaluation the same but the projects different? Or a blend? Why for each?)
Does evaluation drive improvement? For whom?
Describe and explore evaluation practices that are embedded into the whole community project from its conception and throughout its course.
creative evaluation methods and
how these interact with the hard data needed for funders
Other thoughts
3. I want to look into this using some examples and ethical questions from my own work with some of the carers at Carers Center Tower Hamlets. I’ve received really positive feedback from the carers and the center; however, I am also aware that they are carers and are likely caring for me a bit in being so kind.
a. It makes me wonder how participants evaluate work in a context where their gratitude may be implicit.
b. how evaluation practices can allow participants to flag things that weren’t good enough or needed improvement, when evaluation is often done face to face with the facilitators or will be read by them
4. There’s a lot of self-congratulation in community work by the organizers too, which I’m not criticizing, but I wonder how it limits evaluation’s potential to transform and lift up the work’s practice and aesthetics
a. I think that aesthetics should still be considered, not forgotten, still used and used well to make a good point.
b. people empower themselves; programs may give the tools for this. Self-congratulatory work limits/reframes this.
5. Is causality proved? Participants may be better off after a program, but this may just be from spending time with people, or for any reason (life being the big ball of spaghetti it is);
how does evaluation investigate causality?
6. How does evaluation that is heavily geared towards funding affect the creative encounters in the participatory project itself?
7. Evaluation requires having set objectives
a. who sets these?: participant, institution, facilitator
b. how are they renegotiated as the project changes and develops?