Partnering with Oregon Humanities, my team is proposing our very own virtual Conversation Project and series of follow-up surveys for our Environmental Engagement project. Our proposed conversation would take place in three different regions of Oregon, exploring how individuals are connected to land and discussing what proper land management looks like. Afterwards, we would distribute a series of surveys to assess the impacts of our conversation over time and regional differences in thought. To learn more about our proposal, please look at my team’s “What,” “Who,” and “How” posts.
In thinking practically about our project’s implementation, we must consider how we will assess our work and gather feedback from participants about our performance and the Conversation Project itself. Through assessments, we can take a critical look at what we have done, learn from our mistakes, celebrate our successes, and grow as engagers in the future.
Our summative assessment, which gauges whether we managed to reach our goals in completing our engagement project, is already built into our project in the form of post-conversation surveys. These surveys will assess the long-term impacts of our Conversation Project, asking participants if and how their opinions have changed since the conversation and whether they have taken further action on the topic discussed. With the results from these surveys, we will be able to understand how people think differently based on their location, the lasting effects of dialogical engagement, and, specifically, the effectiveness of the Conversation Project we created and facilitated.
For a formative assessment, which gauges how we can better accomplish our goals in the future, we could ask participants to fill out two additional surveys, beyond the post-conversation surveys, requesting general feedback. The first survey would be distributed directly after the Conversation Project ends and would ask participants to rate, on a numerical scale, the content and creativity of the conversation, the preparedness and openness of the facilitators, how the Conversation Project was advertised, and how much they enjoyed and learned from the experience. It would also ask participants to provide feedback on how we could have improved the experience. The second survey would be distributed along with the final survey six months after the Conversation Project, asking participants to rate the ease, quality, and content of our surveys on a numerical scale. It would also ask participants whether the number of surveys distributed was fair and not overwhelming. After reading over the ratings and feedback from both surveys, our group would discuss what we could have done differently and which elements of our project we would keep for future endeavors.
Another way we can formatively assess our progress is by counting how many participants have taken the surveys. We would expect the number of participants filling out the surveys to decline over time as the surveys grow further removed from the Conversation Project. However, by tracking how many participants filled out the surveys at the beginning versus the end of our engagement project, we could potentially discover a cutoff point where too much time has passed and not enough people are continuing to participate. Doing this could show whether asking the public to participate in a long-term project, even one that is not time-consuming, is a realistic expectation. Therefore, our assessment of participant retention would influence how we go about future engagement projects.