The assessment cycle in higher education has been widely promoted for over a decade as a tool to improve teaching and learning by systematically defining student learning outcomes, measuring performance, analyzing data, and using results to inform instruction. Advocates argue that this process nurtures a culture of continuous improvement, transparency, and instructional refinement that benefits students, faculty, and institutions (Kuh, Jankowski, Ikenberry, & Kinzie, 2014; Simper, Mårtensson, Berry, & Maynard, 2022). Recent scholarship further highlights the role of technology-enabled processes and enhanced data literacy in broadening the scope and transparency of assessment practices, allowing faculty and students to use assessment data more effectively to inform teaching and learning (Cui, Chen, Lutsyk, Leighton, & Cutumisu, 2023).
Accreditation bodies began emphasizing student learning outcomes–based assessment in the late 20th century in response to growing accountability demands. Regional accreditors such as the Higher Learning Commission (HLC), the Middle States Commission on Higher Education (MSCHE), and the Western Association of Schools and Colleges (WASC) began requiring evidence that institutions measure and act on student learning outcomes as early as 2009 (Ewell, 2009). This marked a shift away from input-based metrics, such as faculty credentials or library collections, toward a focus on what students actually know and can do. However, the mandate arrived without dedicated funding or structural support, forcing institutions to absorb new assessment duties within existing capacity. As a result, many faculty members, already juggling heavy course loads and service obligations, lack the time, training, or incentives to engage meaningfully in assessment work (Bennett, Sloan, & Varner, 2023).
Because higher education funding models rarely include direct incentives tied to demonstrated student learning, assessment initiatives consistently take a back seat to institutional revenue metrics such as enrollment, persistence, and graduation rates. As a result, the assessment cycle often devolves into an administrative obligation, driven more by accreditation requirements than by genuine efforts to improve instruction. Faculty typically receive minimal or no compensation, course release, or targeted professional development for performing learning outcomes assessment, reinforcing perceptions of assessment as an unfunded mandate imposed from above. Spence (2022) found that faculty often perceive assessment tasks as disconnected from their everyday teaching responsibilities, a perception exacerbated by limited institutional support and one that breeds frustration and skepticism about the value of the work. Bennett, Sloan, and Varner (2023) similarly documented faculty sentiments that assessment efforts seldom translate into visible improvements in curriculum or teaching, reducing participation to mere compliance rather than meaningful pedagogical engagement.
Recent studies confirm serious limitations in the assessment cycle’s implementation. A national review by Holloman et al. (2021) found that most cycles fail to complete the final, and arguably most important, step: evaluating whether instructional changes actually enhance student learning outcomes. Corbo et al. (2016) similarly found that excessive institutional fragmentation, lack of coordination, and top-down governance often prevent meaningful instructional change, despite the presence of assessment structures. These systemic barriers delay the transformative use of assessment data and weaken the potential for innovation. Critics even argue that the cycle often subordinates pedagogy to compliance, reducing what could be a powerful educational tool to a mere reporting exercise (Brown, 2021). Without faculty buy-in, aligned incentives, and a direct connection to student learning, the cycle remains a closed loop of data collection that rarely benefits the students it was designed to serve.
In many institutions, faculty view assessment as an administrative burden that is disconnected from daily teaching (Banta & Palomba, 2015). When outcomes assessment fails to produce visible instructional benefits, faculty often dismiss it as “busywork.” Turner (2014) noted that instructors frequently limit their engagement to the bare minimum required for accreditation compliance. This compliance-driven mindset severely undermines the assessment cycle’s potential to uncover and address meaningful instructional gaps that could benefit student learning.
Jillian Kinzie, former Director of the National Institute for Learning Outcomes Assessment (NILOA), has cautioned against overreliance on quantitative metrics (Kuh et al., 2014). Many institutions lean heavily on standardized tests, surveys, and rubric scores because they seem precise and align neatly with accreditation reporting. However, this emphasis can lead to “data overload,” where large datasets accumulate without sufficient interpretation or application, ultimately limiting their value in guiding instructional improvement (Banta & Palomba, 2015). Recent research echoes this concern: a 2023 national survey found that faculty often feel overwhelmed by assessment data, leading to disengagement rather than meaningful pedagogical use (Bennett, Sloan, & Varner, 2023). Other studies have shown that quantitatively driven assessment cultures can marginalize qualitative, course-level innovations, effectively reducing assessment to a compliance exercise rather than a catalyst for teaching and learning excellence (Corbo et al., 2016; Oleson & Hora, 2014).
Closing the loop, the final phase of the assessment cycle, requires applying student learning outcomes assessment evidence to drive curricular reform and then evaluating its effects. In reality, many institutions skip this step or address it superficially. Holloman et al. (2021) found that few programs consistently reassess whether implemented changes genuinely enhance learning outcomes. When faculty observe their assessment data disappearing without clear feedback or tangible influence, their motivation to participate in the assessment process diminishes. Without sustained reflection and action, the cycle deteriorates into repetitive data collection and reporting, detached from genuine instructional improvement. Faculty interested in innovative assessment methods, such as project-based or interdisciplinary approaches, often find their efforts marginalized because existing rubrics emphasize accountability over meaningful pedagogical growth. Bennett, Sloan, and Varner (2023) similarly documented that faculty frequently perceive assessment tasks as compliance-driven activities, disconnected from their core instructional responsibilities and pedagogical goals.
Another systemic challenge is that student learning outcomes can be overly broad or ambiguous, making them difficult to assess meaningfully. For instance, a phrase such as “students will develop critical thinking skills” may sound valuable, but it is often operationalized through simplistic rubrics. Such narrowing of complex constructs can lead to a phenomenon akin to “teaching to the metric,” where educators focus on whatever is easiest to measure rather than fostering genuine skill and competency attainment. Recent analyses support this concern: a systematic review by Pastore (2022) highlights how vague definitions and circular language severely undermine the validity and practical usefulness of learning outcomes assessment. Additionally, research by Kelley (2020) emphasizes that when outcomes are not clearly specified and measurable, assessments fail to capture genuine student growth and critical educational competencies. As a result, the original intention of focusing on meaningful student learning is often overshadowed by institutional pressure to generate easily reportable numbers for accreditation, losing sight of the true complexity of student development.
Inconsistent implementation of student learning assessment across programs weakens its overall impact. Business schools, for example, might use standardized tests to satisfy accreditors (e.g., AACSB), whereas arts faculty rely on portfolios or juried performances, an inconsistency that hampers meaningful comparison and campus-wide dialogue (Banta & Palomba, 2015). The term “assessment” itself is often ambiguous, sometimes referring to institutional effectiveness measures (such as retention, graduation, or transfer rates) rather than direct evaluation of student learning, obscuring whether students are truly developing deep knowledge or skills (Kuh et al., 2014). Institutional efforts frequently conflate systems-level accountability metrics with meaningful assessment of student learning outcomes, prioritizing easily reportable statistics over genuine feedback on learning. A recent longitudinal study found that many physics instructors continue lecturing out of convenience and tradition, even when reform-driven evaluation metrics are in place (Dancy, Henderson, Stains, & Raker, 2024). Furthermore, Callaghan et al. (2025) showed that innovative practices like two-stage exams thrive only where departmental norms support reflective assessment; without such support, even data-driven reforms are short-lived. Leadership changes add further instability: when new initiatives supplant old ones, faculty who are already overloaded see their assessment efforts discarded and soon disengage (Turner & Lumadue, 2014).
A notable shortcoming of most student learning assessment cycles is the lack of direct student involvement. Too often, students are treated as passive subjects rather than active contributors to the assessment process; they are rarely invited to help design assessments, analyze findings, or shape instructional responses (Banta & Palomba, 2015). This is despite growing evidence that involving students fosters deeper engagement, more meaningful feedback, and better attainment of the intended learning outcomes. Cook-Sather, Bovill, and Felten (2014) demonstrate how student–faculty partnerships in assessment and teaching distribute responsibility and uncover critical student perspectives often overlooked by faculty. A 2024 systematic review by Fleckney, Thompson, and Vaz-Serra confirms that peer assessment, when well designed, scaffolded, and supported, significantly enhances students’ evaluative capacities and academic performance across diverse learning environments. Similarly, Fraile, Panadero, and Pardo (2017) found that when students co-create rubric criteria, their self-regulated learning, self-efficacy, and performance improve significantly. Without student voices, assessment cycles risk reinforcing institutional blind spots and generating metrics that fail to address learners’ real challenges and needs.
Another serious shortcoming is that students are rarely informed of assessment results. Data are often aggregated, delayed, and released according to institutional timelines that prioritize external reporting, not student learning. For example, some colleges assess outcomes like critical thinking in the fall, communication in the spring, and leadership the following fall, a rotation intended to reduce faculty workload but one that completely decouples assessment from students’ lived academic experience. A student in the spring may never learn how well they communicate because that outcome was not assessed during their course. Research confirms that timely, specific feedback is essential for student motivation and growth, yet feedback practices remain inconsistent in higher education (Williams, 2024). Students frequently report that assessment feedback is vague, delayed, or not actionable, leaving them unsure of how to improve (Holt, Sun, & Davies, 2024). This compartmentalized, time-bound approach is not just ineffective; it is fundamentally misguided. It treats assessment results as institutional checkboxes rather than as timely, student-facing information that supports learning while it is still happening.
Finally, there is a deeply entrenched inertia in both teaching and institutional policies, an unwillingness or inability to change long-standing practices even when they no longer align with contemporary learning goals. U.S. physics faculty, for instance, report that although they are aware of research-based instructional strategies, most continue to lecture extensively, citing time constraints, perceived lack of support, and institutional culture as key barriers (Dancy et al., 2024). Professional development research further confirms this: lasting pedagogical change requires ongoing institutional support, collaboration, and coaching, elements often missing in higher education (Darling-Hammond et al., 2017). Administrators frequently favor familiar, easily reportable metrics (pass rates, course completions, job placement) because they satisfy accreditor demands and external stakeholders quickly. Meanwhile, STEM faculty who support innovative assessment techniques like collaborative testing and project-based evaluation encounter low adoption rates due to entrenched pedagogical beliefs and institutional inertia (Dinglasan & Weible, 2025). When faculty evaluation and departmental funding rely solely on surface indicators, and when new approaches receive no institutional support or are reversed by leadership changes, the assessment cycle becomes mired in repetitive routines that favor compliance over genuine instructional improvement.
While the assessment cycle in higher education has the potential to drive meaningful improvements in teaching and learning, its current implementation is frequently sidelined by systemic challenges. Instead of acting as a transformative process, it is too often viewed and practiced as a bureaucratic requirement, particularly when it lacks dedicated funding, faculty buy-in, and student involvement. The overreliance on quantitative data tends to obscure richer competencies, and misaligned institutional incentives further diminish its impact. Most critically, many institutions never truly “close the loop”: they do not evaluate whether instructional changes based on assessment activities actually improve student learning.
To realize the assessment cycle’s promise, institutions must reimagine their priorities, allocate appropriate resources, and embrace more inclusive, flexible approaches to learning. Key strategies include integrating student voices, establishing learning outcomes that are clear and meaningful to both students and faculty, and creating feedback systems that support instruction in real time. Without these reforms, assessment will likely continue to operate as an unfunded mandate serving external accountability rather than enhancing student learning.
Author
Dr. Jarek Janio has worked in higher education for over 20 years. He founded the Annual Student Learning Outcomes (SLO) Symposium in 2014 and, with the arrival of COVID-19, started Friday SLO Talks, weekly events that attract faculty and assessment practitioners from across the country and abroad. Dr. Janio currently serves as faculty coordinator at the School of Continuing Education at Santa Ana College in Southern California.
Works Cited
Banta, T. W., & Palomba, C. A. (2015). Assessment essentials: Planning, implementing, and improving assessment in higher education (2nd ed.). Jossey-Bass.
Bennett, L. K. L., Sloan, K., & Varner, T. L. (2023). Faculty and assessment practitioner needs for student learning outcomes assessment in higher education. Intersection: A Journal at the Intersection of Assessment and Learning, 4(2). https://doi.org/10.61669/001c.84194
Brown, K. (2021, Winter). Anti-intellectualism in academia and learning-oriented assessment. Academe, 107(1). Retrieved from https://www.aaup.org/article/anti-intellectualism-academia-and-learning-oriented-assessment
Cook-Sather, A., Bovill, C., & Felten, P. (2014). Engaging students as partners in learning and teaching: A guide for faculty. Jossey-Bass.
Corbo, J. C., Reinholz, D. L., Dancy, M. H., Deetz, S., & Finkelstein, N. (2016). Framework for transforming departmental culture to support educational innovation. Physical Review Physics Education Research, 12(1), Article 010113. https://doi.org/10.1103/PhysRevPhysEducRes.12.010113
Cui, Y., Chen, F., Lutsyk, A., Leighton, J., & Cutumisu, M. (2023). Data literacy assessments: A systematic literature review. Assessment in Education: Principles, Policy & Practice, 30(1), 1–21. https://doi.org/10.1080/0969594X.2023.2182737
Dancy, M. H., Henderson, C., Stains, M., & Raker, J. R. (2024). Examining instructional change: A longitudinal study of college physics faculty. Physical Review Physics Education Research, 20(1), Article 010119. https://doi.org/10.1103/PhysRevPhysEducRes.20.010119
Darling-Hammond, L., Hyler, M. E., & Gardner, M. (2017). Effective teacher professional development. Learning Policy Institute. https://learningpolicyinstitute.org/product/effective-teacher-professional-development-report
Dinglasan, A. J., & Weible, J. L. (2025). Higher education STEM faculty views on collaborative assessment and group testing. Journal of Research in Science, Mathematics and Technology Education, 8(SI), 315–335. https://doi.org/10.31756/jrsmte.4113SI
Ewell, P. T. (2009, November). Assessment, accountability, and improvement: Revisiting the tension (Occasional Paper No. 1). National Institute for Learning Outcomes Assessment. Retrieved from https://www.scirp.org/reference/referencespapers?referenceid=2545368
Fleckney, P., Thompson, J., & Vaz-Serra, P. (2024). Designing effective peer assessment processes in higher education: A systematic review. Higher Education Research & Development. Advance online publication. https://doi.org/10.1080/07294360.2024.2407083
Fraile, J., Panadero, E., & Pardo, R. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy, and performance of establishing assessment criteria with students. Studies in Educational Evaluation, 53, 69–76. https://doi.org/10.1016/j.stueduc.2017.03.003
Holloman, T. L., Smith, A., Riegle-Crumb, C., & Pawley, A. L. (2021). The assessment cycle: Insights from a systematic literature review on broadening participation in engineering and computer science. Journal of Engineering Education, 110(4), 829–857. https://doi.org/10.1002/jee.20425
Holt, D., Sun, X., & Davies, B. (2024, August 19). Assessment feedback: What do students want and need. Journal of University Teaching & Learning Practice, 21(9). https://doi.org/10.53761/tv2dfa83
Kelley, C. L. (2020). Writing specific and measurable learning outcomes: A faculty instructional guide. Fairleigh Dickinson University. Retrieved from https://www.fdu.edu/wp-content/uploads/2020/01/SLOAGpart2.pdf
Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. (2014). Knowing what students know and can do: The current state of student learning outcomes assessment in U.S. colleges and universities. National Institute for Learning Outcomes Assessment. Retrieved from https://www.minotstateu.edu/Academic/_documents/assessment/Knowing-What-Students-Know-and-Can-Do.pdf
Oleson, A., & Hora, M. T. (2014). Teaching the way they were taught? Revisiting the sources of teaching knowledge and the role of prior experience in shaping faculty teaching practices. Higher Education, 68(1), 29–45. https://doi.org/10.1007/s10734-013-9678-9
Pastore, S. (2022). Assessment literacy in the higher education context: A critical review. Intersection: A Journal at the Intersection of Assessment and Learning, 4. https://doi.org/10.61669/001c.39702
Simper, N., Mårtensson, K., Berry, A., & Maynard, N. (2022). Assessment cultures in higher education: Reducing barriers and enabling change. Assessment & Evaluation in Higher Education, 47(7), 1016–1029. https://doi.org/10.1080/02602938.2021.1983770
Spence, M. E. (2022). How faculty perceive their role in student learning assessment and program improvement (Doctoral dissertation, University of Arkansas). Retrieved from https://scholarworks.uark.edu/etd/4800
Topping, K. J., & Ehly, S. W. (Eds.). (1998). Peer-assisted learning. Lawrence Erlbaum Associates.
Turner, C. C. (2014). At cross purposes: Faculty perceptions of outcomes assessment and program change (Doctoral dissertation, Texas A&M University-Commerce). Retrieved from https://digitalcommons.tamuc.edu/etd/516/
Watson, C. E., & Werder, C. (2016). Engaging faculty in assessment: A practical guide to institutional strategies. American Journal of Business Education, 9(5), 221–228. https://files.eric.ed.gov/fulltext/EJ1194243.pdf
Williams, A. (2024). Delivering effective student feedback in higher education: An evaluation of the challenges and best practice. International Journal of Research in Education and Science, 10(2), 473–501. https://doi.org/10.46328/ijres.3404

