CIRES | Education and Outreach

Evaluation 101

Evaluation is a “systematic investigation of the worth or merit of an object” (NSF Evaluation Handbook).


Why should you evaluate your project or program?

Evaluation will:

  • Monitor quality
  • Gather evidence on impact
  • Document results and findings
  • Provide publishable data

The evaluation process has many components; not all are relevant to every project.


Defining project goals and objectives

It is critical to a project's success to carefully articulate the project's goals and objectives, align those goals with the planned activities, and identify ways to measure the outcomes.

Goals should be SMART (Specific, Measurable, Achievable, Relevant, Time-bound):

[Image: Eval_SMART.jpg]


Logic Model

“Systematic and visual way to present and share your understanding of relationships among the resources you have to operate your program, the activities you plan, and the changes or results you hope to achieve.” (W.K. Kellogg Foundation, 2004)

[Image: Eval_LogicModel.png]

Alignment between project goals, activities, and anticipated impact is critical to project success. Logic models are graphical representations that support this alignment process. Logic models usually have the following components:

  • Inputs - What new and existing resources will be used to support the project? (e.g., funding, project team, partnerships)
  • Activities - What are the main things the project will do? (e.g., develop curriculum materials, develop teacher professional development workshop)
  • Outputs - What products will be created? (e.g., lesson plan, video, museum exhibit)
  • Short-term Outcomes - What will occur as a direct result of the activities and outputs? (typically, changes in knowledge, skills, attitudes) (e.g., teachers increase their understanding of climate change)
  • Mid-term Outcomes - What results should follow from the initial outcomes? (typically, changes in behavior, policies, practice) (e.g., teachers change their teaching approach)
  • Long-term Outcomes - What results should follow from the mid-term outcomes? (typically, changes in broader conditions) (e.g., more students from underserved minorities will join the STEM workforce)

[Image: Eval_Evaluate.png]

Find more information and logic model templates from the Evaluate project here.

 


Below is an example evaluation matrix.

| Project Goal | Measurable Objective | Possible Method | Instruments |
| --- | --- | --- | --- |
| Reach the intended audience | Audience reflects target number and demographics | Observation and head counts, pre-survey | Observation protocol and rubric, survey questions |
| Increase awareness of climate action networks | Attendees will list 2-3 local climate action groups | Exit slip, request for more info | Exit slip, request sheets |
| Increase climate science knowledge | Attendees will correctly answer climate science questions | Pre-post surveys | Survey questions |
| Increase emotional connection to climate | Attendees will exhibit affective responses during performance | Observations, exit interview | Observation protocol and rubric |
| Increase self-reported willingness to take sustainable actions | Attendees pledge sustainable behavior | Exit interview, analysis of pledges | Interview protocol, pledge coding scheme |
| Increase sustainable behaviors and skills | Decreased fossil fuel usage | Utility records | Record analysis protocol |
| Increase participation in climate networks | Repeated attendance at climate events | Track attendance at future events, logins | Record analysis protocol |


Theory of Change

A theory of change is defined by Connolly & Seymore (2015) as a “predictive assumption about the relationship between desired changes and the actions that may produce those changes. Putting it another way: If I do x, then I expect y to occur, and for these reasons.”

Developing a theory of change for each project helps all project members connect with the purpose and intended outcomes of project activities. It also helps the team describe how each activity contributes toward reaching the overall goals. The development of a theory of change should be grounded in the existing literature and in what is already known about the effectiveness of project components. Reviewing the literature before the program starts ensures that best practices are followed and that lessons learned by others are considered.

Read more about Theory of Change here.


Evaluation Questions

Evaluation questions are developed at the proposal stage to guide the evaluation plan. Evaluation plans are usually aligned with project activities and outcomes.

Example of evaluation questions for a Research Experience for Undergraduate (REU) program:

[Image: Eval_REU.png]


Evaluation Matrix

An evaluation matrix aligns each project goal and objective with activities and the measures that will be used to evaluate the effectiveness of all project activities.

Read more about Evaluation Matrices here.

[Image: Eval_Matrix.png]


Evaluation Plan

The evaluation plan includes the formative and summative evaluation components. The difference between formative and summative evaluation is illustrated in the graphic from Steve Wheeler’s blog:

[Image: Eval_SteveWEBBER.jpg]

 

An evaluation plan is guided by the project’s logic model, theory of change, or evaluation matrix, and differs for each project. The graphic below summarizes the components of an evaluation plan.

Evaluation activities typically include:

  • Surveys
  • Interviews
  • Focus groups
  • Observations
  • Pretest–posttest knowledge/attitude/behavior assessments
  • Web analytics
  • Tracking
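A pretest–posttest comparison ultimately reduces to simple arithmetic on paired scores. The sketch below is a minimal, hypothetical illustration in Python (the function name, scores, and percent-correct scale are invented for the example), assuming each participant took the same knowledge survey before and after the program:

```python
# Minimal sketch: average per-participant gain on a pre/post knowledge survey.
# All names and data here are hypothetical illustrations, not a CIRES tool.
from statistics import mean

def mean_gain(pre_scores, post_scores):
    """Return the average change from pretest to posttest (paired scores)."""
    if len(pre_scores) != len(post_scores):
        raise ValueError("pre and post scores must be paired per participant")
    return mean(post - pre for pre, post in zip(pre_scores, post_scores))

# Hypothetical percent-correct scores for five attendees
pre = [40, 55, 60, 50, 45]
post = [70, 65, 80, 75, 60]
print(mean_gain(pre, post))  # average gain of 20 percentage points
```

In practice an evaluator would also report the spread of the gains and, for larger samples, use a paired statistical test rather than the mean alone.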

[Image: Eval_FormSum.png]

Front-end Evaluation – Evaluation at the project start to inform the work

  • Identify what the team needs to know in order to design the program for maximum effectiveness.
  • Design and conduct needs assessments, pre-program surveys, literature reviews, and landscape studies (e.g., stakeholder interviews)

 

Formative Evaluation – Evaluation that guides the leadership team during a project and allows for iterative design

  • How can the program be fine-tuned during its lifetime to make it more effective?
  • Typical formative evaluation approaches include:

    • Feedback surveys of participants
    • Website analytics
    • Interviews with participants, team members or stakeholders
    • Usability studies of deliverables
  • Formative evaluation findings are summarized in reports to the project leadership, including recommendations for improving future implementation

 

Summative Evaluation – Evaluation that summarizes the accomplishments of the project

  • How well did the team achieve the project goals and objectives?
  • Typical summative evaluation approaches include:

    • Longitudinal summary of all evaluation findings across the project duration
    • Interviews with stakeholders to assess the project’s impact
    • Triangulation between different data sets
    • Report on project deliverables
  • Summative evaluation findings are shared through

    • Achievement reports organized by goals and objectives
    • Contributions to journal articles or presentations
    • Data compilation to use in future funding applications
  • Summative evaluation reports include recommendations for future project implementations

External Critical Review – “critical friend” to a project team to review project design and implementation

The CIRES evaluation and research team can act as a critical friend to a project team to review all project design components, including but not limited to i) the theoretical framework, ii) logic model, iii) research and evaluation instruments, iv) research design plan, v) program implementation plan, vi) participant recruitment plan, vii) dissemination plan, and viii) review of deliverables.


Institutional Review Board

Evaluation often requires approval from the institutional review board (IRB) of the institution performing the study. The IRB reviews all studies that involve human subjects. Regulations differ by institution; when working with CIRES, our evaluation team will request IRB approval. IRBs will usually classify evaluation studies as exempt (or even as non-research), requiring minimal oversight of the evaluation process. The IRB ensures that data are de-identified before sharing, protects the rights and privacy of study participants, and holds the evaluation team to a high standard in data management and storage.

Does my project require institutional review board review?


Outline for evaluating workshops

Outline for evaluating websites