Example of a creative placemaking event. Courtesy of Kansas State University.
One of the biggest challenges in implementing cultural programs is measuring impact. Programs are evaluated to determine the efficiency of their processes and the efficacy and effects of their outcomes on communities or social parameters. It is, however, undeniably complicated to tease out the long-term, unintended, and indirect impacts of these programmatic outcomes.
A recent identifiable trend among cultural policy practitioners has been the rise of creative placemaking projects, which seek to effect positive change in communities and neighborhoods through culture-related policy levers. However, these projects present several potential difficulties to evaluators, some particular to cultural projects and others standard challenges for any type of evaluation project. Three of these difficulties stand out as the most problematic and frequent: (1) confusing correlation or association with causality, (2) the lack of trustworthiness and accuracy of data, and (3) design issues stemming from the particular attributes of the creative placemaking process. I will discuss these three items briefly and then propose some general, but oft-overlooked, guidelines that should prove helpful in evaluating creative placemaking projects.
First, novice and experienced evaluators alike commonly confuse correlation with causality. The phrase "correlation does not imply causation," popularized by the American statistician Edward Tufte, expresses that an association between two variables does not indicate that one causes the other. For example, a creative placemaking project focused on the reading habits of a certain neighborhood might be implemented at the same time as a local school reading initiative. It would then be unclear which program is responsible, and to what extent, for any improvement in reading habits among the targeted community. The real challenge is constructing a counterfactual: a hypothetical situation in which everything remains the same except that the program of interest does not take place. Comparing this hypothetical alternative with the real results isolates the effects of the program in question. This is discussed further in the guidelines section below.
One of my favorite examples of how correlation does not imply causation, from Tyler Vigen's blog, Spurious Correlations.
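To make the counterfactual idea concrete, here is a minimal sketch in Python. All numbers (baseline score, effect sizes, sample sizes) are invented for illustration: a naive before/after comparison credits the placemaking project with the concurrent school initiative's effect as well, while comparison against a counterfactual neighborhood isolates the project's own contribution.

```python
import random

random.seed(42)

# Hypothetical illustration: reading scores in a neighborhood where a
# creative placemaking project and a school reading initiative run at
# the same time. All numbers are invented for this sketch.

def simulate_scores(n, base, program_effect, school_effect):
    # Post-program score = baseline + confounding school initiative
    # + the placemaking program's own effect + measurement noise.
    return [base + school_effect + program_effect + random.gauss(0, 2)
            for _ in range(n)]

baseline = 60.0
treated = simulate_scores(200, baseline, program_effect=3.0, school_effect=5.0)
# Counterfactual: a comparable neighborhood with the school initiative
# but WITHOUT the placemaking project.
control = simulate_scores(200, baseline, program_effect=0.0, school_effect=5.0)

def mean(xs):
    return sum(xs) / len(xs)

naive_estimate = mean(treated) - baseline            # credits both programs
counterfactual_estimate = mean(treated) - mean(control)  # isolates the project

print(f"naive before/after estimate: {naive_estimate:.1f}")
print(f"counterfactual estimate:     {counterfactual_estimate:.1f}")
```

The naive estimate absorbs the school initiative's effect; only the counterfactual comparison recovers something close to the placemaking project's true contribution.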
Second, with regard to the trustworthiness and accuracy of data, evaluators face the challenge of judging the validity of the data's source. This is extremely important but often overlooked, and neglecting it can introduce immense bias and hamper feedback and improvement for future placemaking projects. Problems usually arise when (1) the program evaluation uses incorrectly collected data, (2) it takes an excessively qualitative approach, or (3) it is undertaken, unsupervised, by people with little or no experience in the program evaluation process. Any effective evaluation has to consider not only the utility and feasibility of the evaluation but also reveal and convey technically accurate information, both for the direct beneficiaries of the evaluation and for potential future analysts who may use the collected data.
Finally, the third difficulty of creative placemaking impact evaluation relates to the fact that most of these projects tend to be more organic and spontaneous, with less well-defined objectives, metrics, and intermediate process indicators than initiatives from other policy areas. As Ian Moss pointed out in Createquity, "creative placemaking has an outcomes problem": the project team typically has a clear idea of what it is putting into the process and what it hopes to get out of it, but a much vaguer sense of how it will get from Phase 1 to Phase 3. A solution to this issue is to introduce benchmark indicators during the design phase, before the project starts. The evaluation of process and outcomes then becomes easier, since key performance indicators will have been embedded in the project plan and tracked. Furthermore, if these benchmark indicators are common across projects (either within a field or within a given organization), future data comparison will be much easier.
South Park’s classic humorous take on bridging objectives to outcomes.
General guidelines to improve creative placemaking evaluation
a). The Mixed Methods Approach
The most effective strategy for preventing the aforementioned difficulties from compromising hard-won data is a mixed methods approach, which uses both qualitative and quantitative tools to measure programmatic impact. The benefit of this approach is that it allows evaluators to understand how local contextual factors help explain variations in program implementation and outcomes. It also helps build baseline data for quantitative evaluations when conducting a baseline survey is not possible, for example when information about prior projects is missing or nonexistent.
Many evaluations are carried out toward the end of the program and lack reliable information on project conditions, similar projects, or comparative target populations at the time the program was initiated. This makes it difficult to determine whether the differences observed at the end of the project can be attributed to program effects or whether they could be due, at least in part, to concurrent programs or other factors affecting the target population. Furthermore, in this situation there is an increased risk of incorrectly accounting for the initial status of the evaluated parameters.
Another benefit of the mixed methods approach is that it strengthens the representativeness of in-depth qualitative studies (e.g., by linking case study selection with the quantitative sampling framework) and facilitates the comparison of qualitative findings with quantitative survey data. This can provide much-needed insight into the validity and value of both kinds of data.
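As a hypothetical illustration of linking case study selection to the quantitative sampling framework, the sketch below draws case-study sites from a stratified survey frame so that the qualitative cases mirror the quantitative strata. The neighborhood names and strata are invented:

```python
import random

random.seed(0)

# Hypothetical sampling frame from a quantitative survey: each record is
# a neighborhood tagged with a stratum (e.g., level of arts participation).
frame = [
    {"neighborhood": f"N{i:02d}",
     "stratum": random.choice(["low", "medium", "high"])}
    for i in range(30)
]

def select_cases(frame, per_stratum=2):
    """Pick case-study sites within each stratum so that qualitative
    cases follow the quantitative sampling framework."""
    by_stratum = {}
    for row in frame:
        by_stratum.setdefault(row["stratum"], []).append(row)
    return {s: random.sample(rows, min(per_stratum, len(rows)))
            for s, rows in by_stratum.items()}

cases = select_cases(frame)
for stratum, rows in sorted(cases.items()):
    print(stratum, [r["neighborhood"] for r in rows])
```

Because the case studies are drawn per stratum rather than ad hoc, qualitative findings can later be compared against survey results for the same strata.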
The importance of Mixed Methods courtesy of freshspectrum.com
b). The Metric Definition Process
Defining metrics for creative placemaking projects is a process where experience, common sense, and context come together. This process should be considered iterative to some extent, as it is extremely complicated to arrive at concretely defined metrics that remain useful for the entire development period of a placemaking project. That said, certain preconditions can and should be considered at the outset, before defining the evaluation metrics of a placemaking project. The first is to create indicators that depend directly on the objectives of the project. These indicators have to relate to the context, objectives, team, instrument, and the possibility of changes in the scope of the evaluation. It may prove tempting to use a placemaking project as an opportunity to evaluate and measure a wide variety of observable social and economic trends affecting a given context. However, a lack of focus when deciding on indicators will dilute the specific aims and objectives of both the creative placemaking project and the associated evaluation process.
The second is to stay focused on the fundamental purpose of any indicator, which is to be helpful. It is therefore important to create indicators before we have the results, not after; otherwise we risk biasing our estimators by choosing the indicators that the results showcase most favorably. This can happen when the person in charge of evaluating the program is the same person who executed it, so that a negative evaluation would reflect directly on their own work.
The third precondition is to create indicators that are objective, neutral, and not open to interpretation. The whole team has to share the same notion of what is being measured and how, without further interpretation. For this purpose, the indicators must be discussed within the team. The question we should ask ourselves is: does measuring this indicator allow us to know whether a specific objective has been achieved?
The fourth precondition is that the indicator must be as easy to check as possible, so as to ensure its value to the team. Very interesting and exciting indicators that are difficult to measure offer no real utility. Furthermore, the evaluation of a given indicator has to fit within the budget and must be trustworthy. Lastly, indicators must always give the same result when measured more than once in the same context and project, and any change in the indicator must correspond to a change in the variable it measures.
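The reliability requirement, that an indicator gives the same result when measured more than once in the same context, can be checked with a simple test-retest comparison. The sketch below uses invented visitor counts and an arbitrary tolerance:

```python
# Hypothetical test-retest check: an indicator measured twice in the same
# context should give (nearly) the same result. Counts are invented.

first_pass  = [42, 38, 51, 47, 40]   # e.g., weekly visitor counts, round 1
second_pass = [41, 39, 50, 47, 41]   # same sites, measured again

def max_disagreement(a, b):
    """Largest absolute difference between paired measurements."""
    return max(abs(x - y) for x, y in zip(a, b))

TOLERANCE = 2  # acceptable measurement noise, chosen for this sketch

reliable = max_disagreement(first_pass, second_pass) <= TOLERANCE
print(reliable)  # True
```

If repeated measurements disagree beyond the tolerance, the indicator (or the measurement procedure) needs to be redesigned before the evaluation relies on it.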
The fifth precondition is that, to the greatest extent foreseeable, indicators must be sensitive to small variations in the context over the course of the evaluation period (barring unforeseen events that alter the course of a placemaking endeavor). In other words, any variables that can reasonably be expected to affect a creative placemaking project (e.g., anticipated changes to arts programming, seasonal access issues) should be accounted for when establishing indicators. Moreover, indicators should not be developed in a vacuum; they should be comparable to indicators used in other projects. These steps will lead to indicators verifiable by agents external to the project, thus ensuring the objectivity and applicability of evaluation results to other contexts and studies.
As a final addition to this point, and as a nod to best practices, I recommend the SMART (Specific, Measurable, Attainable, Relevant, Timed) approach, which can be employed to create indicators during the definition of the program or project.
Courtesy of KPI.com
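As a sketch of how the SMART criteria can be operationalized, the following records an indicator with one field per SMART dimension and checks that no textual field is left blank. The field names and the example indicator are hypothetical, not from any standard schema:

```python
from dataclasses import dataclass

# Hypothetical SMART indicator record; field names are illustrative.
@dataclass
class Indicator:
    name: str
    specific_objective: str   # Specific: which project objective it tracks
    unit: str                 # Measurable: how it is quantified
    target: float             # Attainable: a realistic target value
    rationale: str            # Relevant: why it matters to the project
    deadline: str             # Timed: when it must be measured by

    def is_complete(self):
        # Minimal check that every textual SMART field is filled in.
        return all([self.name, self.specific_objective, self.unit,
                    self.rationale, self.deadline])

footfall = Indicator(
    name="weekly_event_attendance",
    specific_objective="Increase participation in neighborhood arts events",
    unit="attendees per week",
    target=150.0,
    rationale="Attendance is a direct proxy for community engagement",
    deadline="2025-12-31",
)
print(footfall.is_complete())  # True
```

Forcing every indicator through a template like this during the design phase makes gaps visible early, before the project generates any results.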
Many challenges accompany attempts to assess and measure the impact of cultural placemaking projects. However, through focus, practice, and dedication to the evaluation process, the insights and objective scientific data generated by such research can help determine whether or not creative placemaking endeavors actually yield their intended benefits and outcomes. This makes all the difference in demonstrating how arts and culture matter, substantively and specifically, to the communities of people they represent and serve.