
UC Merced Academic Program Assessment Glossary 

The UC Merced Glossary of Assessment Terms is designed to facilitate understanding by providing a common language for discussions. The terms provided should not be considered an exhaustive list. The glossary provides broad and general descriptions designed to define terms across academic disciplines. It is understandable that a specific discipline (e.g., Psychology, History) may define terms in a different manner.

The list is alphabetical. Click a letter below to jump to the relevant section.

A | B | C | D | E | G | I | L | P | R | S | T | V | W


A


Action Research

Action research is an effective inquiry-based method for promoting collaboration between researchers and program practitioners in the analysis and subsequent improvement of academic program outcomes and processes. Action research provides a constructive framework for ensuring that critical information is used by key stakeholders to implement data-driven interventions for continuous academic improvement (Hansen & Borden, 2006).

Alignment

Alignment is the connection between learning outcomes, learning activities, and assessments. In an aligned course, the learning outcomes, activities, and assessments match up, so that students learn what the course intends and the assessments accurately measure what students are learning.

Assessment

Assessment is the ongoing process of:

  1. Establishing clear, measurable expected outcomes of student learning.
  2. Ensuring that students have sufficient opportunities to achieve those outcomes.
  3. Systematically gathering, analyzing, and interpreting evidence to determine how well student learning matches our expectations.
  4. Using the resulting information to understand and improve student learning.
     (Suskie, 2004)

B


Benchmarking

An actual measurement of group performance against an established standard at defined points along the path toward the standard. Subsequent measurements of group performance use the benchmarks to measure progress toward achievement (New Horizons for Learning). For more information, see "What are benchmarks? How are they determined?"


C


Closing the Loop / Assessment Cycle

Commitment to using assessment results to inform improvements; the unit presents evidence that assessment results, including student learning assessment, are routinely used for institutional improvement, effectiveness and planning.

Criteria

A standard of judgment, a rule or principle for evaluating or testing something. 

Curriculum Map

A curriculum map helps instructors and students visualize the organization of a degree program in support of intended program-level learning outcomes. In doing so, curriculum maps also describe the coherence of a program’s curriculum – how courses, and other learning experiences, work together to strategically support intended student learning. Some important features of curriculum maps are that they emphasize interdisciplinary connections, promote essential skills and link content information across different courses. See UC Merced curriculum map guidelines and template.
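For illustration only, the sketch below uses hypothetical course numbers and outcomes (not an actual UC Merced program) to show how a curriculum map can be represented as a simple matrix of courses against program learning outcomes (PLOs), marking where each outcome is introduced (I), developed (D), or mastered (M). A spreadsheet is the more common format; the point is simply that every PLO should be supported at each level somewhere in the curriculum.

    # Hypothetical curriculum map: rows are courses, columns are PLOs.
    # "I" = introduced, "D" = developed, "M" = mastered, "" = not addressed.
    curriculum_map = {
        #            PLO1  PLO2  PLO3
        "BIO 001":  ["I",  "I",  ""  ],
        "BIO 100":  ["D",  "D",  "I" ],
        "BIO 150":  ["",   "D",  "D" ],
        "BIO 195":  ["M",  "M",  "M" ],  # capstone course
    }

    # Quick coverage check: each PLO should be introduced, developed,
    # and mastered somewhere in the curriculum.
    for plo in range(3):
        levels = {row[plo] for row in curriculum_map.values()} - {""}
        print(f"PLO{plo + 1} coverage: {sorted(levels)}")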

D


Direct Evidence

Direct assessment occurs when measures of learning are based on student performance or demonstrations of the learning itself. Scoring performance on tests, term papers, or the execution of lab skills are all examples of direct assessment of learning. Direct assessment of learning can occur within a course (e.g., performance on a series of tests) or across courses or years (e.g., comparing writing samples from seniors over the last few years). Other examples of direct evidence include capstone experiences, theses, curriculum maps, lab reports, reflective essays when graded, and portfolios (see assessment specialists for more examples).

E


Evaluation

The use of assessment findings (evidence/data) to judge program effectiveness; used as a basis for making decisions about program changes or improvement (Allen, Noel, Rienzi & McMillin, 2002).


G


Goals

A goal describes a broad learning outcome or concept (i.e., what you want students to learn or a unit to achieve) expressed in general terms. Learning goals are generally included in the course description of the syllabus. Program goals pertain to the entire program and are located in the catalog.

Example goals include "problem-solving skills" or "providing high-quality, cost-effective healthcare for students." Learning outcomes, by contrast, are the specific, observable, and measurable knowledge or skills that students gain or develop as a result of a specific course; these outcomes are clearly stated in the course syllabus. There are three categories of student learning outcomes.

  • COGNITIVE OUTCOME: what students KNOW; knowledge, comprehension, application, analysis, synthesis, and evaluation.
  • AFFECTIVE OUTCOME: what students CARE ABOUT; students' feelings, attitudes, interests, and preferences.
  • PERFORMANCE OUTCOME: what students CAN DO; skilled performance, production of something new (e.g., a paper, project, piece of artwork), critical thinking skills (e.g., analysis and evaluation).

I


Indirect Evidence

Proxy signs of student learning, including students', alumni's, or others' perceptions of their learning, and factors that influence student learning outcomes, such as the student experience.

  • Focus Groups: a group selected for its relevance to an evaluation that is engaged by a trained facilitator in a series of discussions designed for sharing insights, ideas, and observations on a topic of concern to the evaluation (National Science Foundation, 2010).
  • Interviews occur when researchers ask one or more participants general, open-ended questions and record their answers (Creswell, 2008).
  • Reflective Essays are generally brief (five- to ten-minute) essays on topics related to identified learning outcomes, although they may be longer when assigned as homework. Students are asked to reflect on a selected issue. Content analysis is used to analyze results.
  • Surveying is a method of collecting information from people about their characteristics, behaviors, attitudes, or perceptions. Surveys most often take the form of questionnaires or structured interviews (Palomba & Banta, 1999). General definition: an attempt to estimate the opinions, characteristics, or behaviors of a particular population by investigation of a representative sample.
  • See assessment specialists for more examples.

L


Learning Outcomes

Learning Outcomes are statements that articulate the intellectual abilities, knowledge, or values/attitudes that students should demonstrably possess as a result of a given learning experience. 

  • Bloom’s Taxonomy: a classification of educational goals and objectives created by a group of educators led by Benjamin Bloom. They identified three areas of learning objectives (domains): cognitive, affective, and psychomotor. The cognitive domain is broken into six areas from less to more complex. The taxonomy may be used as a starting point to help one develop learning outcomes.
    Six levels arranged in order of increasing complexity (1=low, 6=high):
    1. Knowledge: Recalling or remembering information without necessarily understanding it. Includes behaviors such as describing, listing, identifying, and labeling.
    2. Comprehension: Understanding learned material and includes behaviors such as explaining, discussing, and interpreting.
    3. Application: The ability to put ideas and concepts to work in solving problems. It includes behaviors such as demonstrating, showing, and making use of information.
    4. Analysis: Breaking down information into its component parts to see interrelationships and ideas. Related behaviors include differentiating, comparing, and categorizing.
    5. Synthesis: The ability to put parts together to form something original. It involves using creativity to compose or design something new.
    6. Evaluation: Judging the value of evidence based on definite criteria. Behaviors related to evaluation include: concluding, criticizing, prioritizing, and recommending (Bloom, 1956).
  • Course Learning Outcomes (CLOs) are statements describing the intellectual abilities, knowledge, and/or values or attitudes that students should demonstrably possess at the end of a course. CLOs support student learning in multiple ways. First, they provide instructors with a framework for designing a course, including content, assignments, assessments, and instructional strategies. Second, when explicitly linked to assignments and assessments, CLOs also provide students with a learning-based rationale for the work they are asked to do as well as a reference point for monitoring their own learning, thereby supporting engagement and motivation. Third, CLOs provide a reference point for instructors and students to “research” student learning, yielding insights into student abilities relevant to both current and future offerings of the course. Finally, CLOs facilitate the development of a coherent, developmentally organized, programmatic curriculum that, as WASC puts it, is “more than simply an accumulation of courses or credits,” by allowing faculty to specify a course’s contribution to the program’s intended learning outcomes (PLOs) and to connect the course to the learning taking place in the courses that precede and follow it. When connections between CLOs and PLOs are explicitly communicated in syllabi and curriculum maps, students and instructors alike are able to develop a more holistic view of the major. In short, and as reflected in UC Merced’s mission, learning outcomes underpin a “student-centered” approach to education (see UC Merced senate document for details).
  • Program Learning Outcomes (PLOs) are intended to describe the intellectual abilities, knowledge, and values that students should demonstrably possess at graduation, as a result of a cohesive and coherent degree program that, as WASC puts it, is “more than simply an accumulation of courses or credits” (see UC Merced senate document for details). Find UC Merced Academic Program Learning Outcomes in the course catalog.
  • Learning outcomes specify what students will know, be able to do, or be able to demonstrate when they have completed or participated in academic programs leading to certification or a degree. Outcomes are often expressed as knowledge, skills, attitudes, behaviors, or values. A multiple-methods approach is recommended to assess student learning outcomes indirectly and directly. Direct measures of student learning require students to demonstrate their knowledge and skills; they provide tangible, visible, and self-explanatory evidence of what students have and have not learned as a result of a course, program, or activity (Suskie, 2009; Palomba & Banta, 1999).

P


Program Review

The periodic peer review of the effectiveness of an educational degree program encompasses student learning and assessment resources. Academic program review is predicated on the idea of expert evaluation. Academic programs, combining cutting-edge research with teaching, are far too complicated to be evaluated by simple measures; each program must be evaluated by peers whose knowledge of the fields of inquiry and education enables them to identify programmatic strengths, weaknesses, and opportunities.
UC Merced Academic Program Review involves the following processes within the senate policy.
  1. PROC (the Periodic Review Overview Committee) establishes the scope of the review.
  2. A self-study is developed by the unit using the senate guidelines and data reports provided by Institutional Research and Decision Support (IRDS).
  3. The review team visit (composed of external experts) typically encompasses 2 ½ days and includes meetings with the dean, the chair, faculty, staff, students, and alumni.
  4. The final written report is transmitted to PROC for an error check and disseminated to program faculty and the school dean.
  5. The program chair works with colleagues to draft a response to the report and sends the response to PROC by the following month.
  6. The program leadership is invited to report on progress in the unit since the review as well as to comment on the quality of the review process itself.

Program review is distinct from program evaluation, which pertains to the nonprofit and P-12 sectors.


R


Reliability

Reliable measures are measures that produce consistent responses over time.
Inter-rater reliability is a measure of reliability used to assess the degree to which different judges or raters agree in their assessment decisions. Inter-rater reliability is useful because human observers will not necessarily interpret answers the same way; raters may disagree as to how well certain responses or material demonstrate knowledge of the construct or skill being assessed.
Intra-rater reliability is a measure of reliability used to assess the degree to which a single rater's judgments agree across similar assessment decisions.
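As an illustration of how inter-rater agreement might be quantified, the sketch below uses hypothetical rubric scores and Cohen's kappa, one common (but not the only) agreement statistic. It reports both raw percent agreement and chance-corrected agreement for two raters scoring the same set of student work.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Percent agreement and Cohen's kappa for two raters scoring the same items."""
        n = len(rater_a)
        # Observed agreement: proportion of items both raters scored identically.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected chance agreement, from each rater's marginal score distribution.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        categories = set(rater_a) | set(rater_b)
        p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
        kappa = (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0
        return p_o, kappa

    # Example: two raters applying a 4-level rubric to ten student papers.
    rater_1 = [3, 2, 4, 4, 1, 3, 2, 4, 3, 2]
    rater_2 = [3, 2, 4, 3, 1, 3, 2, 4, 2, 2]
    agreement, kappa = cohens_kappa(rater_1, rater_2)
    print(f"Percent agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")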

Rubric

Specific sets of criteria that clearly define, for both student and teacher, what a range of acceptable and unacceptable performance looks like. Criteria define descriptors of ability at each level of performance and assign values to each level. The levels referred to are proficiency levels, which describe a continuum from excellent to unacceptable work (System for Adult Basic Education Support).
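For illustration only (hypothetical criteria and level descriptors, not an official rubric), a simple analytic rubric pairs each criterion with a descriptor and a point value at each proficiency level:

    # Hypothetical two-criterion analytic rubric on a 4-point proficiency scale.
    RUBRIC = {
        "Thesis":   {4: "Clear, arguable, and insightful",
                     3: "Clear and arguable",
                     2: "Present but vague",
                     1: "Missing or unclear"},
        "Evidence": {4: "Well chosen and thoroughly analyzed",
                     3: "Relevant and mostly analyzed",
                     2: "Present but thinly analyzed",
                     1: "Minimal or irrelevant"},
    }

    def total_score(ratings):
        """Total a student's rubric ratings, e.g. {"Thesis": 3, "Evidence": 4}."""
        return sum(ratings[criterion] for criterion in RUBRIC)

    print(total_score({"Thesis": 3, "Evidence": 4}))  # 7 of a possible 8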
 
  • VALUE Rubrics: a set of rubrics developed by the Association of American Colleges and Universities (AAC&U) for the Valid Assessment of Learning in Undergraduate Education (VALUE) initiative. They are institution-level rubrics developed by faculty teams and intended as a baseline that can be modified or adapted by institutions that choose to use them. The VALUE rubrics cover the following topics:
  • Civic Engagement
  • Communication
  • Creative Thinking
  • Ethical Reasoning
  • Foundations and Skills for Lifelong Learning
  • Information Literacy
  • Inquiry and Analysis
  • Integrative Learning
  • Intercultural Knowledge and Competence
  • Oral Communication
  • Problem Solving
  • Quantitative Literacy
  • Reading
  • Teamwork

S


Sample Size

Sample size determination is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. Increasing the sample size strengthens the validity of inferences drawn from the results. Therefore, smaller programs may want to consider collecting data from students over time (longitudinally) and/or from a larger proportion of their student population.
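For reference, a commonly used formula for the number of respondents needed to estimate a proportion within a given margin of error is n = z^2 * p(1 - p) / e^2, with a finite-population correction applied for small programs. The sketch below uses hypothetical numbers and is illustrative only; consult IRDS or an assessment specialist for study-specific guidance.

    import math

    def sample_size_for_proportion(margin_of_error=0.05, z=1.96,
                                   expected_proportion=0.5, population=None):
        """Sample size needed to estimate a proportion within a margin of error."""
        n = (z ** 2) * expected_proportion * (1 - expected_proportion) / (margin_of_error ** 2)
        if population is not None:
            # Finite-population correction: small programs need fewer respondents.
            n = n / (1 + (n - 1) / population)
        return math.ceil(n)

    # Example: a program with 120 majors, 95% confidence, +/-10% margin of error.
    print(sample_size_for_proportion(margin_of_error=0.10, population=120))  # about 54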

Signature Assignments

Assignments, usually at the course or program level, that measure whether student learning outcomes have been achieved. A 'signature assignment' is an assignment, task, activity, project, or exam purposefully created or modified to collect evidence for one or more specific learning outcomes. Signature assignments work well when they are course-embedded; ideally, other coursework builds toward the signature assignment. They can be generic in task, problem, case, or project to allow for contextualization in different disciplines or course contexts. Assignment Library: a library of assignments for building, revising, and using assignments.

Standards

A level of accomplishment all students are expected to meet or exceed. Standards do not necessarily imply high quality learning; sometimes the level is a lowest common denominator. Nor do they imply complete standardization in a program; a common minimum level could be achieved by multiple pathways and demonstrated in various ways (Leskes, 2002).

Statistical Significance

A test of statistical significance is a mathematical procedure for determining whether a null hypothesis can be rejected at a given alpha level. Tests of statistical significance play a large role in quantitative research designs but are frequently misinterpreted. The most common misinterpretation of the test of significance is to confuse statistical significance with the practical significance of the research results (Munoz as cited in Mathison, 2005).
While statistical significance is important in academic research, it will rarely pertain to assessment endeavors. Assessment is not testing for the probability that a result occurred by chance, or comparing to a null hypothesis; we are interested ultimately in whether the distribution of student performance is meeting our desired benchmarks, and the contributing context as demonstrated in indirect evidence.
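To make the distinction concrete, the sketch below (hypothetical rubric scores; the scipy library is assumed to be available) runs an independent-samples t-test for statistical significance and also reports Cohen's d, an effect size that speaks to practical significance:

    import math
    import statistics
    from scipy import stats  # widely used scientific library; assumed installed

    def compare_cohorts(scores_a, scores_b, alpha=0.05):
        """t-test (statistical significance) plus Cohen's d (practical significance)."""
        t_stat, p_value = stats.ttest_ind(scores_a, scores_b)
        n1, n2 = len(scores_a), len(scores_b)
        s1, s2 = statistics.stdev(scores_a), statistics.stdev(scores_b)
        # Pooled standard deviation for Cohen's d.
        pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        cohens_d = (statistics.mean(scores_a) - statistics.mean(scores_b)) / pooled_sd
        return {"p_value": round(p_value, 3),
                "reject_null": p_value < alpha,
                "cohens_d": round(cohens_d, 2)}  # ~0.2 small, ~0.5 medium, ~0.8 large

    # Hypothetical rubric scores (1-4) for two cohorts of the same course.
    print(compare_cohorts([3, 3, 2, 4, 3, 3, 2, 4], [3, 2, 2, 3, 3, 2, 2, 3]))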

 

T


Triangulation

Triangulation is the process of corroborating evidence from different individuals (e.g., an instructor and a student), types of data (e.g., course-embedded assignments, theses, presentations), methods of data collection (e.g., documents, surveys, focus groups, interviews), or descriptions and themes in qualitative research (Creswell, 2008).

Triangulation is used in assessment in order to obtain a broader picture of student learning, including contextual factors that may explain the observed performance. We achieve triangulation by using both direct and indirect evidence in our assessment practices. The idea is that one can be more confident with a result if different methods lead to the same result.


V


Validity

Validity means that researchers can draw meaningful and justifiable inferences from scores about a sample or population (Creswell, 2002). It is the extent to which an assessment measures what it is supposed to measure, and the extent to which inferences and actions made on the basis of assessment results are appropriate and accurate (CRESST, 2011).

When applied to assessment, validity refers to how well a measure provides information that can help improve the program under study, and whether it provides an accurate picture of student learning.

Assessment results may be invalid as a basis for decision making when one or more of the following conditions occur:

  • the sample size is small relative to the student population
  • the assignment used for assessment is not part of the standard or required curriculum
  • students outside the major are sampled
  • programmatic criteria are not used to evaluate student work
  • the results have not been triangulated with more than one type of evidence
  • the evidence does not measure the expectations of the PLO or criteria
  • inter-rater reliability is low

W


WASC (WSCUC) Core Competencies

WASC redesigned the reaccreditation process in 2013, changing both the substance of the review and the review process itself.  Among several new accreditation expectations is that institutions must ensure the development of the following “five core competencies” in all baccalaureate programs:
  • Written communication
  • Oral communication
  • Quantitative reasoning
  • Information literacy
  • Critical thinking
For more information, please visit the WASC Core Competencies page.
 
The following link http://assessment.ucmerced.edu/academic/wscuc-core-competencies provides information and resources for integrating the assessment of the five core competencies into the established annual PLO assessment practices of undergraduate majors at UC Merced. 
 
Adapted from the following sources:
American Public University System (APUS) (2015) Glossary of Common Assessment Terms. http://www.apus.edu/community-scholars/learning-outcomes-assessment/univ...
Association of American Colleges and Universities: Beyond Confusion an Assessment Glossary http://www.aacu.org/publications-research/periodicals/beyond-confusion-a...
Carnegie Mellon’s Eberly Center for Teaching Excellence – Common Assessment Terms http://www.cmu.edu/teaching/assessment/basics/glossary.html
Indiana University-Purdue University Indianapolis (IUPUI) Program Review and Assessment Committee (PRAC) senate.ucmerced.edu/sites/senate.ucmerced.edu/files/documents/undergraduate_clo_plo_guidelines_final_may_2012.pdf
 
Works Cited
Bloom, B. (1956). Taxonomy of educational objectives: the classification of educational goals. Handbook I: Cognitive Domain. White Plains, NY: Longman.
 
CRESST. (2011, September 18). CRESST. Retrieved from CRESST: http://cresst.org
 
Creswell, J. W. (2002). Educational Research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Merrill Prentice Hall.
 
Creswell, J. W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (3rd ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
 
Hansen, M. J., & Borden, V. M. H. (2006). Using action research to support academic program improvement. New Directions for Institutional Research, 2006(130), 47-62.
 
Leskes, A. (2002). Beyond Confusion: An assessment glossary. Association of American Colleges and Universities, Winter/Spring(2002).
 
Allen, M., Noel, R., Rienzi, B., & McMillin, D. (2002). Outcomes Assessment Handbook. Long Beach, CA: California State University.
 
Munoz, M. A. (2005). Statistical significance. In S. Mathison (Ed.), Encyclopedia of evaluation (p. 390). Thousand Oaks, CA: Sage.
 
National Science Foundation. (2010, December 1). The 2010 User-Friendly Handbook for Project Evaluation. Retrieved November 1, 2016, from Purdue University: https://www.purdue.edu/research/docs/pdf/2010NSFuser-friendlyhandbookforprojectevaluation.pdf
 
New Horizons For Learning. (2002, September 1). Glossary of Assessment Terms. Retrieved November 29, 2016, from New Horizons for Learning: http://www.newhorizons.org
 
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco, CA: Jossey-Bass.
 
Suskie, L. (2004). Assessing student learning: A common sense guide. Bolton, MA: Anker Publishing Company.
 
Suskie, L. (2009). Assessing student learning: A common sense guide. San Francisco, CA: Jossey-Bass.
 
System for Adult Basic Education Support. (n.d.). Glossary of Useful Terms. Retrieved November 29, 2016, from American Public University System: http://www.apus.edu/community-scholars/learning-outcomes-assessment/univ...