
Assessment Resources

Pratt Academic Assessment Overview

Introduction

Academic assessment at Pratt is an iterative process where faculty play a central role in developing, reviewing, and assessing curriculum, pedagogy, and student learning. This page outlines how to get started with assessment, including understanding what assessment is, how it is conducted, and why it is essential. It concludes with frequently asked questions. Additional sections of the assessment LibGuide provide resources to support ongoing learning in assessment. For more information, feel free to review the Academic Assessment Policy.

Figure: Pratt's assessment cycle: Plan for outcomes → Implement the assessment plan → Report analyzed data → Act on data-informed progress.

GETTING STARTED

 

What is assessment?

Academic assessment in higher education refers to the process through which institutions evaluate and measure students' learning, academic achievements, and competencies. This process includes various evaluation methods, such as major assignments (e.g., projects, portfolios, exams), to determine the extent to which students have met the learning outcomes or objectives of a course or program. Assessment is essential because "learning needs an assessment ‘event’ to establish the scope and depth of what has been learned" (Grainger & Weir, 2020, p. 9).

 

Why do we assess?

The purpose of academic assessment is not only to gauge student progress but also to inform teaching practices, improve educational quality, and ensure that educational standards are met. Assessment in higher education is a complex and ongoing process that goes beyond assigning grades. It involves gathering data, providing feedback, and making adjustments to teaching and learning, as well as improving curriculum and programs. When done effectively, assessment helps ensure that students meet course objectives and develop the skills and knowledge needed for future academic and professional success. By balancing various assessment types, aligning them with learning goals, and using technology to improve efficiency and accessibility, institutions can create an effective learning environment for all students.

At Pratt, academic assessment involves both institutional and program-level evaluation of student learning. This is achieved by defining Student Learning Outcomes (SLOs) and identifying how students demonstrate those outcomes. Programs are tasked with providing evidence of student learning through multiple methods:

  • Individual student learning assessment: Provides insights into individual student performance, supporting personal growth.

  • Course-level learning assessment: Measures student learning within a specific course, aiding instructors in improving teaching practices.

  • Program-level learning assessment: Assesses student learning across an entire program, helping faculty make improvements for better student outcomes.

 

Who should be involved in academic assessment?

The assessment process involves faculty, administrators, students, and external stakeholders such as alumni and industry partners. Faculty play a pivotal role in ensuring that students demonstrate the program's learning outcomes, and in driving continuous improvement when students fall short of learning achievement targets. Involving this range of stakeholders (e.g., faculty, students, alumni, employers) ensures a comprehensive understanding of student learning; assessment also helps the Institute meet state, federal, and accreditation requirements.

HOW DO WE ASSESS?

Effective assessment at Pratt is collaborative and should actively involve those who design and deliver learning opportunities, as well as the recipients of these opportunities. Assessment is cyclical and includes the following elements:

  • Multi-Year Assessment Plan: Each program must maintain a multi-year assessment plan (spanning 3–6 years) that includes:
    • A timeline for assessment activities across the plan’s duration
    • Program learning outcomes and their alignment with institutional goals
      • Program learning outcomes are approved through the Institute’s Curricular Review Policy
      • The plan should show the relationship (i.e., “mapping”) between the curriculum and the program learning outcomes, as well as alignment with institutional goals and outcomes
    • Assessment methods, data sources, and student feedback
    • Roles and responsibilities for assessment activities
    • Procedures for sharing results and developing action plans for improvement
    • A sustainable workload across the assessment cycle
  • Creation: The plan should specify what students should learn, where in the curriculum they will learn it, and how learning will be measured.
  • Implementation: Implement the plan by gathering formative and summative evidence of direct and indirect student learning across the program, ensuring alignment with clearly defined learning outcomes.
  • Reporting: After data collection, collaborate with stakeholders to analyze the evidence and produce an annual report with action plans for continuous improvement, thus closing the assessment loop.

 

When and how often do we assess?

Assessment is an ongoing process with activities occurring each year, informed by the multi-year assessment plan. The following timeline outlines key assessment activities:

  • Annual Assessment Plan: From the multi-year plan, academic programs create and implement annual assessments.

  • Annual Assessment Report (Due September 30): The report should describe how the assessment methods were implemented, what the findings were, and how the results were used to improve student learning.

 

Suggested Assessment Timeline

  • August – September 
    • Review the multi-year plan and identify the Student Learning Outcomes (SLOs) and assessment methods for the upcoming year.
    • Determine data collection methods, storage, and aggregation procedures.
  • September – December  
    • Implement the plan and collect data 
  • January – April
    • Implement the spring-semester portion of the plan and collect data
  • April – August
    • Analyze the data, identify conclusions, and discuss actions based on findings
  • September
    • Submit annual assessment summary report

ASSESSMENT ESSENTIALS

Student Learning Outcomes (SLOs)

When creating SLOs, consider the following:

  • What students will learn (knowledge, skills, attitudes, competencies)
  • How they will demonstrate their learning (use action verbs)
  • How to determine whether expectations have been met
  • Whether the SLOs reflect higher levels of learning (e.g., “analyzing,” “evaluating,” “creating,” per Bloom’s Taxonomy)
  • Whether the SLOs are culturally responsive and appropriate for the degree level

Tip: Monitor 3–4 outcomes for three consecutive years, then rotate to the next set of outcomes for the following three years.

For additional help, see the Learning Outcomes section of the Assessment LibGuide.

 

Curriculum Maps

A curriculum map provides a visual overview of where students have the opportunity to learn each SLO, where evidence of learning is collected, and where interventions may be needed. Ensure that courses align with program and institutional learning outcomes, and that they scaffold learning appropriately across the program.
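A simplified, hypothetical example (the course numbers and placements below are invented for illustration, not drawn from a Pratt program) shows the common grid form, where each outcome is Introduced (I), Developed (D), or Mastered (M), and an asterisk marks where evidence of learning is collected:

                              SLO 1   SLO 2   SLO 3
    Course 101                  I       I       –
    Course 250                  D       I       I
    Course 350                  D       M*      D
    Course 490 (capstone)       M*      –       M*

    I = introduced, D = developed, M = mastered; * = evidence collected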

For additional help, see the Curriculum Mapping section of the Assessment LibGuide.

 

Assessment Methods

Assessment methods should allow students to demonstrate learning as specified by the SLOs. The methods should include a mix of assignment types (e.g., portfolios, capstone projects, performance evaluations), modes of expression (e.g., visual, written, oral), and both direct and indirect sources of evidence, as well as both formative and summative assessments. Direct assessment evaluates students' actual work; indirect assessment relies on self-reported data such as surveys, self-reflections, and graduation rates. Formative assessments provide continuous feedback early in the learning process, while summative assessments evaluate student learning at the end of a course or program.

Things to consider: 

  • Methods have been selected or designed to demonstrate the specified SLOs
  • Methods include a mix of assignment types (e.g., portfolios, capstone assignments, research papers, performances) or modes of expression (e.g., visual, written, oral)
  • Methods include a mix of direct and indirect sources of evidence
  • Methods include a student-level target (i.e., the level of performance that reflects satisfactory learning, such as a minimum rubric score)
  • Methods include a program-level target (i.e., the percentage of students attaining the student-level target; see the sketch after this list)
  • Methods include a mix of formative and summative assessments (see the FAQ below)
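To make the student-level and program-level targets concrete, here is a minimal sketch in Python. The scores, the student-level target (3 on a 4-point rubric), and the program-level target (75% of students) are hypothetical values chosen for illustration, not Pratt standards.

    # Hypothetical rubric scores (4-point scale) for one SLO, one per student
    scores = [4, 3, 2, 3, 4, 3, 1, 3, 4, 2]

    STUDENT_LEVEL_TARGET = 3     # assumed score reflecting satisfactory learning
    PROGRAM_LEVEL_TARGET = 0.75  # assumed share of students expected to reach it

    # Student level: which students met the target?
    met = [s >= STUDENT_LEVEL_TARGET for s in scores]

    # Program level: what share of students met the student-level target?
    attainment = sum(met) / len(scores)

    print(f"{sum(met)} of {len(scores)} students ({attainment:.0%}) met the target")
    print("Program-level target met" if attainment >= PROGRAM_LEVEL_TARGET
          else "Program-level target not met")

With these numbers, 7 of 10 students (70%) meet the student-level target, so the 75% program-level target is not met, and the finding would feed into an action plan.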

Collecting and Using Data

Data should be prepared in a way that encourages productive discussion and decision-making. It should be disaggregated by individual SLOs, presented meaningfully (e.g., in tables or graphs), and periodically reported to appropriate stakeholders. Assessment is only as good as the conversations it initiates and the actions it spurs. Collected data should indicate the following (a sketch of one way to compute such breakdowns appears after the list):

  • Course/cohort
  • Semester/year
  • Aggregation by group across courses or cohorts (e.g., tables, graphs, narrative themes)
  • Disaggregation by individual SLO or student group
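As one way to produce such aggregated and disaggregated views, the sketch below uses pandas on an entirely hypothetical dataset; the column names, course numbers, and target are illustrative assumptions, not a required format.

    import pandas as pd

    # Hypothetical records: one row per student score, per SLO
    records = pd.DataFrame({
        "slo":      ["SLO 1", "SLO 1", "SLO 1", "SLO 2", "SLO 2", "SLO 2"],
        "course":   ["DES-101", "DES-101", "DES-201", "DES-101", "DES-201", "DES-201"],
        "semester": ["Fall 2024"] * 6,
        "score":    [4, 2, 3, 3, 4, 2],  # 4-point rubric scale (assumed)
    })

    TARGET = 3  # assumed student-level target
    flagged = records.assign(met=records["score"] >= TARGET)

    # Aggregate: share of students meeting the target, per SLO
    by_slo = flagged.groupby("slo")["met"].mean()

    # Disaggregate: the same rate broken out by course within each SLO
    by_slo_course = flagged.groupby(["slo", "course"])["met"].mean()

    print(by_slo, by_slo_course, sep="\n\n")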

 

Making Use of Data 

It is vital to involve the community of stakeholders in the process of making sense of evidence and proposing improvement actions as needed. This helps mitigate bias by engaging parties with a variety of perspectives, experiences, and insights. 

  • Stakeholders are involved in making sense of data and proposing improvement actions 
    • Students (e.g., members of the committee doing this work, focus groups)
    • Faculty and staff members (e.g., curriculum committee meetings, department meetings, retreats)
    • Employers and alumni (e.g., advisory groups, surveys, focus groups) 
  • Consider whether one or more of the following may benefit from attention: 
    • Student learning outcomes 
    • Assessment methods
    • Rubrics/scoring guides 
    • Assignment prompts
    • Survey questions
    • Learning experiences 
    • Curriculum
    • Pedagogy (teaching methods) 
  • Program action plans are linked back to conclusions based on evidence of student learning
  • A timeline for implementing program improvement actions is specified
  • Evidence of the impact of recent changes is regularly sought and reported

 

Engaging Stakeholders

Involving stakeholders, such as students, faculty, alumni, and employers, helps programs make sense of data and propose improvements. This ensures that the assessment process is objective and informed by diverse perspectives. Collaborating with students in this work offers advantages for both faculty and students. For students, it can lead to greater engagement, motivation, confidence, metacognitive skill, and a sense of responsibility for their learning. For faculty, the benefits include improved student engagement, a shift in perspective on teaching, a deeper understanding of learning from diverse viewpoints, and a more reflective and adaptive teaching approach. For example, during the reflective phase, rubrics can help students assess the quality of their work against established goals rather than by comparison with others (Grainger & Weir, 2020).

 

Assessment Platform

Coursedog is Pratt's assessment platform. Canvas, the Institute's official Learning Management System (LMS), has rubric features that can be used to assess student work and to aggregate and disaggregate student learning data. Canvas integrates with Coursedog to streamline access to direct evidence of student learning for reporting.
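For programs that want to pull rubric scores out of Canvas directly, Canvas's public REST API returns rubric assessments alongside submissions. The sketch below is a minimal example using the Submissions endpoint with the rubric_assessment include; the instance URL, token, and IDs are placeholders, pagination handling is omitted, and the Coursedog integration may make this step unnecessary.

    import requests

    BASE = "https://<institution>.instructure.com"  # placeholder Canvas instance
    TOKEN = "YOUR_API_TOKEN"                        # placeholder access token
    COURSE_ID, ASSIGNMENT_ID = 1234, 5678           # placeholder IDs

    resp = requests.get(
        f"{BASE}/api/v1/courses/{COURSE_ID}/assignments/{ASSIGNMENT_ID}/submissions",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"include[]": "rubric_assessment", "per_page": 100},
    )
    resp.raise_for_status()

    # Each submission may carry a rubric_assessment mapping criterion ID -> points
    for sub in resp.json():
        rubric = sub.get("rubric_assessment") or {}
        scores = {cid: cell.get("points") for cid, cell in rubric.items()}
        print(sub["user_id"], scores)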

FREQUENTLY ASKED QUESTIONS

 

Can we use grades as academic assessment data?

GPAs, course grades, and assignment grades are not reliable indicators of whether students have achieved program SLOs, in part because a single grade typically blends multiple outcomes with factors unrelated to any one of them. Debates among assessment practitioners also raise issues of variability, subjectivity, and the educational value of grading in this context (Innovative Assessment in Higher Education: A Handbook for Academic Practitioners, 2019). A better approach is to use rubrics that are directly tied to specific learning outcomes, ensuring that scores are based solely on relevant criteria.

 

Why do we need both direct and indirect measures of student learning?

Direct assessments (e.g., rubric-based evaluations) provide concrete data on student performance, while indirect assessments (e.g., surveys, focus groups) capture students’ perceptions of their learning experiences. Together, they offer a comprehensive view of program effectiveness.

 

When and how should we revise our assessment plans?

Assessment plans are multi-year (3–6 years) frameworks intended to remain stable until sufficient data has been collected (three consecutive years of data) or a significant change provides a strong rationale for modifications aimed at improvement. Annual reports document the implementation of these multi-year plans and include action plans that guide the next cycle of assessment.

 

How do we set good targets?

Targets should reflect professional judgment, based on data analysis and feedback from stakeholders. They can be revised as necessary but should always be justified by evidence. Targets establish what level of performance reflects satisfactory student learning and what percentage of students achieving that level constitutes acceptable program performance (e.g., ___% of students achieve a rubric score of _ on a _-point scale).
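As a worked example with hypothetical numbers: if the student-level target is a rubric score of 3 on a 4-point scale and the program-level target is 75%, then 21 of 28 students scoring 3 or higher gives 21 / 28 = 75%, just meeting the target, while 20 of 28 (about 71%) would fall short and warrant discussion of an action plan.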

 

What are formative and summative assessments? Do we need both?

Formative assessments provide ongoing feedback early in the learning process, while summative assessments measure overall learning at the program's conclusion. Both are necessary for comprehensive evaluation and improvement of student learning.

