Overview of Annual Academic Program Assessment

Why We Do It

Annual program assessment is intended to promote regular engagement in evidence-based program planning in support of the faculty's goals for student learning and success. Ongoing attention to student achievement and related learning needs is particularly important for a new and growing institution. It is also essential to realizing and documenting our goals for excellence in undergraduate and graduate education.

At the undergraduate level, evidence of student learning and the student experience has supported program-level decision making in numerous ways over the last several years. Faculty have responded to assessment findings by adjusting prerequisites, re-sequencing program curricula, refocusing assignments, adopting new pedagogies, and developing new courses. Most importantly, perhaps, annual assessment has promoted faculty conversations around student learning and teaching, facilitating a more cohesive vision of our programs' educational intentions.

At the graduate level, program-level assessment continues to grow in concert with the expansion of our graduate offerings.

How We Do It

The exact methods for assessing student learning are determined by the program's faculty, in keeping with the faculty's ownership of the curriculum. The overarching goals for programs, however, are the same. These include:

  • gathering evidence that yields actionable insights into student learning achievement and the student experience in relation to the expected program learning outcomes. The most effective strategies involve complementary lines of direct and indirect evidence designed to represent the cumulative impact of the program's curriculum on student learning and success at a given point in the degree.
  • using broadly shared, programmatic criteria and standards to evaluate the evidence. Usually elaborated in the form of a rubric, the criteria and standards describe what students are able to do at the time of graduation if they have achieved a given program learning outcome.
  • discussing results as a faculty and with other stakeholders as appropriate, including graduate student instructors in the program.
  • identifying and implementing actions to improve student learning and/or the student experience, as appropriate, recognizing that the action of making no change is also possible.

More specific descriptions of expectations for effective assessment practices and use of results are detailed in this rubric. Our goal is for all programs to consistently practice an "effectively fostering improvement of program-level student learning" level of assessment. We are moving toward that goal, recognizing that each program learning outcome poses its own unique assessment challenges.

To date, programs have used a rich variety of evidence. Examples of direct evidence for assessing undergraduate majors and minors include senior theses, embedded exam questions, portfolios (electronic and paper), papers and reports, and ETS subject tests.

Example forms of indirect evidence include reflective writings within portfolios, program-specific surveys, embedded questions on the graduating senior survey, data from institutional surveys, careful review of syllabi, curricula, and curriculum maps, graduate student instructor (GSI) interviews, and student interviews conducted by the Students Assessing Teaching and Learning (SATAL) program.

When We Do It

Every academic program summarizes its assessment activities annually and submits a report with the support of staff assessment coordinators. Reports are submitted to the school dean and then to the Periodic Review Oversight Committee.