Program Evaluation Part 1 – My Evaluation Experience


Introduction

Evaluations are part of our everyday lives. And yet, few of us know how to plan and carry out program and project evaluations in a logical and meaningful way. This four-part blog series aims to expand our collective understanding of the definitions, kinds, and implementations of evaluation and evaluation research. This part focuses on my own evaluation experiences.

The Kinds of Evaluation Activities With Which I Have Been Involved

In the midst of taking both program evaluation classes at the University of New Mexico simultaneously, I've realized that I have been involved in many different kinds of evaluation activities. Russ-Eft and Preskill (2009) categorize evaluation activities as:

1) developmental, which engages stakeholders in long-term relationships and helps design and modify programs, products, etc.;

2) formative, which is program improvement oriented and is ongoing and iterative; and

3) summative, which is intended to give a final judgement on the program in terms of its merit, worth, or value (pp. 18-19).

Summative evaluation includes monitoring and auditing, outcome evaluation, impact evaluation, and performance measurement. From my earliest memories, I have been involved, either informally or formally, with several forms of summative evaluation, including outcome evaluation as a regular part of my schooling. That outcome evaluation was accomplished mostly through reflection (namely, was that class period helpful to my learning?), surveys, or standardized testing. In graduate school, I was specifically involved in a program review that used both performance measurement and outcome evaluation methods to evaluate the NASA Pursue Project. In these evaluations, we were required to demonstrate that the grant funds NASA had provided to improve our general chemistry sequence at UNM were actually producing the outcomes we had specified. Our evaluation within this project was also formative, and the use of iterative, ongoing evaluation led NASA to fund the project for several years after the initial summative evaluation was performed.

In my teaching and professional career, I've been involved in all three kinds of evaluation activities. We performed an informal developmental evaluation of Arts and Sciences within a year or two after I was hired. This process involved generating and answering many questions and plans, including what kinds of activities and strategies we were going to implement, what goals we would try to reach over the next five years, what practices needed to be in place for implementation to succeed, and what criteria or standards we would judge the program against. This evaluation was part of a "revisioning effort" to reimagine what Arts and Sciences could be, not only to faculty and staff but also to our most important stakeholders: our students. It formed a foundation for rethinking the status quo and continues to inform our present assessment efforts.

More recently, I've been involved in class and program assessment efforts for the general education courses offered by the CNM Chemistry Department as well as for the A.S. (Associate in Science) in Chemistry. These evaluation processes are mostly formative, with more summative evaluations due every three years. I'm also involved in CNM's school-wide accreditation efforts, and in my experience the observation that accreditation "assumes experts know what is good" (Russ-Eft & Preskill, 2009, p. 54) rings true; that assumption does not always hold. While accreditation experts may know what works at their institutions, their expertise often doesn't transfer directly to CNM, because our student population has different work and family demands as well as different expectations of the schooling process.

Within my professional life, I regularly review articles for the Journal of Chemical Education and for Chemistry Education Research and Practice, which is a bit of an impact evaluation process: we look not only for clear communication of what was done but also for the paper's impact on the chemistry community, asking whether it will move the literature in a forward or novel direction. I wish reviewers and authors could be more conversational and open, however, allowing for a more formative review process. I also wish this were true for the many NSF review panels I have served on, in which both our individual and panel reviews feel summative. I wish the funding process were more open, more iterative, and more formative, because I think the process would then be largely transformative instead of disheartening to potential principal investigators (PIs).

Strengths That Every Evaluation Process Should Encourage

The three major strengths I have seen in evaluations, whether I conducted them myself or participated in evaluations conducted by others, are:

1) securing proper funding, evaluator time, and stakeholder buy-in before the evaluation begins (especially for large-scale evaluations);

2) communicating updates in regular, ongoing ways to all stakeholders; and

3) leveraging the evaluation design to secure implementation.

Of course, evaluations can be accomplished without considering or implementing the steps needed to secure these strengths, but, unless the evaluations are informal, I have not seen a great deal of success result from those efforts. I believe these strengths allow for evaluative processes that are more open and transparent, and therefore more likely to be implemented effectively.

Problems I Have Seen in Conducting Evaluations

The problems I have observed in evaluation processes are multi-fold. These problems include:

1) not enough time spent with the problem before starting to ideate possible solutions;

2) not enough buy-in from relevant stakeholders;

3) no communication in the midst of the evaluation process, only at the end;

4) lack of implementation of evaluation findings;

5) lack of funding; and

6) a persistent failure to close the loop and follow through.

There are many other potential problems, but I see these as the most egregious, as they can morph a potentially excellent, easily implemented evaluation into one that sits endlessly on a shelf.

References

Russ-Eft, D., & Preskill, H. (2009). Evaluation in organizations: A systematic approach to enhancing learning, performance, and change (2nd ed.). Basic Books.

Editor's note: Part 2 of this series can be accessed by clicking here.
