Academic assessment at Pratt is an iterative process where faculty play a central role in developing, reviewing, and assessing curriculum, pedagogy, and student learning. This page outlines how to get started with assessment, including understanding what assessment is, how it is conducted, and why it is essential. It concludes with frequently asked questions. Additional sections of the assessment LibGuide provide resources to support ongoing learning in assessment. For more information, feel free to review the Academic Assessment Policy.
Academic assessment in higher education refers to the process through which institutions evaluate and measure students' learning, academic achievements, and competencies. This process includes various evaluation methods, such as major assignments (e.g., projects, portfolios, exams), to determine the extent to which students have met the learning outcomes or objectives of a course or program. Assessment is essential because "learning needs an assessment ‘event’ to establish the scope and depth of what has been learned" (Grainger & Weir, 2020, p. 9).
The purpose of academic assessment is not only to gauge student progress but also to inform teaching practices, improve educational quality, and ensure that educational standards are met. Assessment in higher education is a complex and ongoing process that goes beyond assigning grades. It involves gathering data, providing feedback, and making adjustments to teaching and learning, as well as improving curriculum and programs. When done effectively, assessment helps ensure that students meet course objectives and develop the skills and knowledge needed for future academic and professional success. By balancing various assessment types, aligning them with learning goals, and using technology to improve efficiency and accessibility, institutions can create an effective learning environment for all students.
At Pratt, academic assessment involves both institutional and program-level evaluation of student learning. This is achieved by defining Student Learning Outcomes (SLOs) and identifying how students demonstrate those outcomes. Programs are tasked with providing evidence of student learning through multiple methods:
Individual student learning assessment: Provides insights into individual student performance, supporting personal growth.
Course-level learning assessment: Measures student learning within a specific course, aiding instructors in improving teaching practices.
Program-level learning assessment: Assesses student learning across an entire program, helping faculty make improvements for better student outcomes.
The assessment process involves faculty, administrators, students, and stakeholders such as alumni and industry partners; engaging this range of perspectives ensures a comprehensive understanding of student learning. Faculty play a pivotal role in ensuring that students demonstrate the program's learning outcomes and in driving continuous improvement when students fall short of learning achievement targets. Assessment also helps meet state, federal, and accreditation guidelines.
Effective assessment at Pratt is collaborative and should actively involve those who design and deliver learning opportunities, as well as the recipients of these opportunities. Assessment is cyclical and includes the following elements:
Assessment is an ongoing process with activities occurring each year, informed by the multi-year assessment plan. The following timeline outlines key assessment activities:
Annual Assessment Plan: From the multi-year plan, academic programs create and implement annual assessments.
Annual Assessment Report (Due September 30): The report should describe how the assessment methods were implemented, the findings from the assessment, and how the results were used to improve student learning.
When creating SLOs, consider the following:
Tip: Monitor 3–4 outcomes for three consecutive years, then rotate to the next set of outcomes for the following three years.
For additional help, see the Learning Outcomes section of the Assessment LibGuide.
A curriculum map provides a visual overview of where students have the opportunity to learn each SLO, where evidence of learning is collected, and where interventions may be needed. Ensure that courses align with program and institutional learning outcomes, and that they scaffold learning appropriately across the program.
For additional help, see the Curriculum Mapping section of the Assessment LibGuide.
Assessment methods should allow students to demonstrate learning as specified by the SLOs. The methods should include a mix of assignment types (e.g., portfolios, capstone projects, performance evaluations), modes of expression (e.g., visual, written, oral), and both direct and indirect sources of evidence, as well as both formative and summative assessments. Direct assessment evaluates student work itself (e.g., projects, papers, exams), while indirect assessment relies on self-reported or proxy data such as surveys, self-reflections, and graduation rates. Formative assessments provide continuous feedback early in the learning process, while summative assessments evaluate student learning at the end of the course or program.
Things to consider:
Data should be prepared in a way that encourages productive discussion and decision-making. It should be disaggregated by individual SLOs, presented meaningfully (e.g., in tables or graphs), and periodically reported to appropriate stakeholders. Assessment is only as good as the conversations it initiates and the actions it spurs. Collected data should indicate:
It is vital to involve the community of stakeholders in the process of making sense of evidence and proposing improvement actions as needed. This helps mitigate bias by engaging parties with a variety of perspectives, experiences, and insights.
Involving stakeholders such as students, faculty, alumni, and employers helps programs make sense of the data and propose improvements, keeping the assessment process objective and informed by diverse perspectives. Collaborating with students in this work offers advantages for both faculty and students. For students, it can lead to greater engagement, motivation, confidence, metacognitive skills, and a sense of responsibility for their own learning. For faculty, the benefits include improved student engagement, a shift in perspective on teaching, a deeper understanding of learning from diverse viewpoints, and a more reflective and adaptive teaching approach. For example, during the reflective phase, rubrics can help students assess the quality of their work against established goals rather than by comparing themselves to others (Grainger & Weir, 2020).
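As a rough illustration of the data-preparation step described above, the sketch below shows one way rubric scores might be disaggregated by individual SLO and compared against a target before being shared with stakeholders. It assumes a generic spreadsheet-style export and the Python pandas library; the column names, the 4-point scale, and the 75% target are illustrative placeholders, not Pratt requirements or an official workflow.

```python
# Illustrative sketch only: disaggregate rubric scores by SLO and check a target.
# Column names, the 4-point scale, and the 75% target are assumed for this example.
import pandas as pd

# One row per student per SLO, scored on a 4-point rubric (hypothetical data).
scores = pd.DataFrame({
    "student": ["A", "A", "B", "B", "C", "C", "D", "D"],
    "slo":     ["SLO 1", "SLO 2", "SLO 1", "SLO 2", "SLO 1", "SLO 2", "SLO 1", "SLO 2"],
    "score":   [4, 3, 2, 3, 3, 4, 3, 2],
})

TARGET_SCORE = 3      # hypothetical rubric score that reflects satisfactory learning
TARGET_PERCENT = 75   # hypothetical share of students expected to reach that score

# Disaggregate by SLO: mean score and percentage of students at or above the target.
summary = scores.groupby("slo")["score"].agg(
    mean_score="mean",
    pct_at_target=lambda s: 100 * (s >= TARGET_SCORE).mean(),
)
summary["meets_target"] = summary["pct_at_target"] >= TARGET_PERCENT
print(summary.round(1))  # a small table suitable for sharing with stakeholders
```

Presented this way, each SLO's results can be read directly against the program's target and brought into the conversations with stakeholders described above.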
Coursedog is Pratt's assessment platform. Canvas, the official Learning Management System (LMS), has rubric features that can be used to assess student work and to aggregate and disaggregate student learning data. Canvas integrates with Coursedog to streamline access to direct evidence of student learning for reporting.
GPAs, course grades, and assignment grades are not reliable indicators of whether students have achieved program SLOs. Debate among assessment practitioners centers on issues of variability, subjectivity, and the educational value of grading in this context (Innovative Assessment in Higher Education: A Handbook for Academic Practitioners, 2019). A better approach is to use rubrics tied directly to specific learning outcomes, ensuring that scores are based solely on the relevant criteria.
Direct assessments (e.g., rubric-based evaluations) provide concrete data on student performance, while indirect assessments (e.g., surveys, focus groups) capture students’ perceptions of their learning experiences. Together, they offer a comprehensive view of program effectiveness.
Assessment plans are multi-year (3–6 year) frameworks that should remain unchanged until sufficient data has been collected (three consecutive years of data) or until a significant change provides a strong rationale for modifications aimed at improvement. Annual reports document the implementation of these multi-year plans and include action plans that guide the next cycle of assessment.
Targets should reflect professional judgment, based on data analysis and feedback from stakeholders. They can be revised as necessary but should always be justified by evidence. Targets establish what level of performance reflects satisfactory student learning and what percentage of students achieving that level constitutes acceptable program performance (e.g., ___% of students achieve a rubric score of _ on a _-point scale).
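As a hypothetical worked example (illustrative numbers, not a Pratt standard): a program might set a target that 75% of students achieve a score of 3 or higher on a 4-point rubric for a given SLO. If 18 of the 24 students assessed reach that level, the observed rate is 18 ÷ 24 = 75% and the target is met; if only 15 do, the rate is 62.5% and the finding would prompt discussion of possible improvements.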
Formative assessments provide ongoing feedback early in the learning process, while summative assessments measure overall learning at the program's conclusion. Both are necessary for comprehensive evaluation and improvement of student learning.