The Assessment Council of the City University of New York gratefully acknowledges Wiley Online Library and New Directions for Community Colleges (Jossey-Bass New Directions Series) for permission to reprint this article from Volume 2019, Issue 186, What Works in Assessment

Abstract

This chapter affirms the increasing value of outcomes assessment at one of many institutions where it was once mostly ignored, describes how the knowledge afforded by increased use of assessment functions as a precursor to change, and shows how the faculty and staff work of program and outcomes assessment—discussions, decisions, preparations, actions, revisions—helps maintain an evolving, dynamic approach to educating community college students.

Over the past decade, resistance to assessment has dwindled, no doubt because of two factors. One is compliance. Regional accreditors across the country, under pressure from the federal government, are exerting more pressure and enforcing stricter oversight at institutions of higher education. Rather than risk a monitoring report or be placed on warning, colleges have hired assessment directors, sent teams of faculty and administrators to assessment institutes, granted reassigned time to faculty to serve on assessment committees, absorbed the vocabulary of assessment—SLOs, program/course alignment, mapping, closing the loop—rewritten mission statements, revised general education programs, purchased software, and detailed these efforts in the latest editions of college websites and catalogs to prove to stakeholders that assessment is underway and expanding at their institutions.

The second, and more important, factor is that faculty are finding value in the process of assessment. In one of the earliest popular assessment texts, Assessing Student Learning: A Common Sense Guide (Suskie, 2004), Linda Suskie lists the reasons she believes faculty might resist assessment: they have misconceptions, do not see the relevance, are satisfied with the status quo, are too busy, have seen too many initiatives go nowhere, devote more time to research than teaching, and are threatened by change. Faculty and staff sometimes react strongly when another initiative comes down the ranks, or administrators make demands, or regional accreditors are due to visit the campus. Some might still think it is possible to resist or sidestep assessment; however, they are becoming lone voices in the expanding discourse about course, program, and General Education assessment. At City University of New York’s (CUNY) Bronx Community College (BCC), recently hired faculty are required to take part in a year‐long New Faculty Seminar. The seminar serves as a primer to the College and includes units on professional development taught by seasoned instructors. There are several units on classroom strategy and technique, including incorporating forms of assessment. New faculty come to understand that assessment is part of their professional duties at the College, that pedagogies, to remain effective, should be routinely assessed, and that the act of assessing student learning is central to effective teaching. They see the intrinsic value of asking how and why students acquire skills and knowledge.

Janet Heller, Chair of BCC’s Department of Health, Physical Education & Recreation, speaks to how assessment is a win/win for students and faculty. “Assessment has become an important part of our teaching. It helps us stay on track by teaching students what they need to learn to be successful in our programs and in their careers. It wasn’t always that way, but now assessment is built into all our courses and programs. As faculty we need to make sure our students are learning what we believe they need to learn, and we need to make sure we are teaching them in a way they can learn. Assessment benefits are reciprocal: we stay on track so that our students can stay on track” (Personal communication, April 13, 2017).

The Limits of Grades

The time‐honored teaching/grading paradigm—imparting information or skill‐sharing to students and evaluating them at predetermined points in the semester; totaling, averaging and awarding final grades to evaluate student performance—allows only a limited measure of a student’s learning, yet many institutions continue to honor the final grade as the credible measurement for intellectual achievement in college. Though the letter grade offers closure to both student and instructor at semester’s end, gives a broad reckoning of a student’s performance of the past 15 weeks, and provides for graduation or transferability to other institutions, it does little to inform the instructor, who might already be preparing for next semester’s classes, where his/her students particularly excelled, where they encountered specific difficulties, where they perhaps received too little instruction. Adhering to the convention of final grade/GPA, we assume a professional but also a limited understanding of a student’s intellectual skills or knowledge, of the learning that did or did not take place. While the transcript grade serves as a means to communicate among stakeholders, it does not indicate in which specific areas of the curriculum—which type of geometric equation, what area of essay development, where in a computer programming unit—the student excelled, stalled, or failed to perform.

The same holds true for the contract between instructor and student—the syllabus. It might tell us how a student’s final grade is computed, but grading, especially at community colleges, will often include points given or forfeited for attendance, points for going to tutoring, class participation, extra credit, and penalties for late work. While the promise of these points might enhance student participation and encourage students to take more responsibility for their learning, they do not indicate where struggling students went adrift, where they encountered obstacles, why some dropped out and halted their studies. The final grade often reflects elements of student activity that fall outside the realm of intellectual accomplishment.

An instructor whose astronomy syllabus states the course goals (what the instructor intends to accomplish) but does not state student learning outcomes (what the student is expected to demonstrate) might cover all 12 chapters on the universe, give 3 quizzes and 2 exams, require 5 lab reports and a final exam for any student averaging below A, and lay a foundation of knowledge for Astronomy 102, yet leave gaps in the instructor’s knowledge of how students in the next iteration of the class might learn and perform better without compromising the required course work. The information needed for improvement might be found in the collected data, which, once interpreted, might reveal a particular difficulty students are having reading a certain chapter in the text, a problem in the professor’s presentation of class materials, confusion in the wording of directions for group projects, or students’ lack of technology skills. Assigning a course grade affirms the completion of extended study and offers a broad evaluation, but does not indicate a process of inquiry and improvement.

Summative outcomes assessment is performed to obtain a broad picture of learning and to discover ways to improve the overall effectiveness of the course or program for the next group of students; formative outcomes assessment concentrates on uncovering areas of weakness as the semester unfolds. In formative assessment, data are immediately analyzed and there is still time to implement a creative response, if necessary. The best knowledge an instructor might garner is detailed information on his/her students’ struggles, as well as a more informed perspective from which to make changes in a course or a program. By the nature of its inquiry, assessment deepens our notions of our students’ skills and knowledge, makes us aware of our own expectations of our teaching (especially when it is not delivering the results we expect), and suggests areas for additional research and often ways to enact change. Performed faithfully over the course of a few semesters, the assessment process reveals areas where students are experiencing difficulties. Where, when pursued, can lead to how and why.

When he was a student at BCC, Paul Jaijairam, Deputy Chair of BCC’s Department of Business and Information Systems, saw his fellow students struggle and sometimes drop out because they were unprepared for the challenge of college courses. “The truth is, our students need additional help, and so we view assessment as another tool in our drive for student success. We want our students to know what’s expected of them from day one; for example, what technologies they’re expected to learn in their course and program, and to what extent. We have twelve programs in our department and the student learning outcomes are on all our syllabi, and the SLO’s shape the trajectory of the course and program. We keep students informed and we keep their success foremost. We continually follow the path of student outcomes, and if students are not acquiring them, we assess and find out why. Five years ago, we were not as cognizant of our students’ biggest challenges” (Personal communication, April 13, 2017).

To remain relevant, any worthwhile practice must be regularly appraised, whether that be the proper way to land a helicopter, release a bowling ball, conduct a biology experiment, write an essay, or perform a yoga posture. With its attention to detail, assessment affords the faculty an opportunity to pinpoint areas of weakness; these might be in the text, in the presentation of material, in the proper sequentiality of classes in a program, or in testing materials. For those students requiring remedial attention, assessment helps to isolate troublesome areas and perhaps suggest methods to address them.

Beyond Compliance

In “From Compliance to Ownership” (2015), Stanley O. Ikenberry and George D. Kuh suggest that to make assessment consequential, we need first to recognize that a good deal of assessment has fallen into the realm of “compliance.” They argue that external pressure from stakeholders who have no role in teaching or assessment has transformed what should be a call to engage and to improve student learning into merely another activity whose value is suspect to the faculty, the very people whose responsibilities include maintaining strong academic standards and whose experience and position give them the best vantage point from which to view student accomplishment.

By defaulting to the demands and expectations of others, the purposes and approaches of learning outcomes assessment morphed over time into a compliance culture that has effectively separated the work of assessment from those individuals and groups on campus who most need evidence of student learning and who are strategically positioned to apply assessment results productively. The assessment function—determining how well students are learning what institutions say they should know and be able to do—inadvertently became lodged at arm’s length from its natural allies, partners, and end users—including the faculty (p. 5).

This state of affairs helps create initiative fatigue, a condition identified by Kuh and Pat Hutchings (2015) and described as a “genuine heightened psychological and physiological state in which faculty and staff members feel overwhelmed by and sometimes conflicted about the number of improvement efforts to which institutional leaders and external authorities are asking them to devote time and effort” (p. 184); that condition worsens as new ideas tumble down from various stakeholders looking to justify their positions of authority or seeking a quick‐fix remedy to improve student learning. Of course, there are no quick‐fix remedies. The oft‐repeated phrase “closing the loop” indicates the ongoing, cyclical nature of assessment—a continuous progression on a circular path with relevant stops: develop measurable outcomes; create a tool and measurement; assess student work against those outcomes; collect and interpret data; implement changes if necessary; revise outcomes if necessary; begin again. This type of process—active, ongoing—is what regional accreditors ask for, not impressive “scores.” Embedded assessment portals provide a granular understanding of how students learn.

The reasons why one student learns and another one does not are often mysterious. When it comes to the most important issue, student learning, Kuh and Hutchings (2015) quote Richard Chait of the Harvard Graduate School of Education, who suggests that faculty and staff stay focused on what’s important: “The main thing is to make sure the main thing is the main thing” (p. 195), a simple phrase that suggests student learning remains central to the task, that the commitment to student success remains central to our mission.

Building an Evidence‐Based Culture

What is an evidence‐based or assessment culture? While some might imagine reports chock‐full of data, with an addendum of blueprints for redesigning programs, arriving at regular intervals at the Office of Assessment, an assessment culture is an evolving, inspirational mindset that comes about through continued diligence by all stakeholders. Such a culture has less to do with submitting crafted assessment reports on the due date or properly entering results into repository software and more to do with an open‐minded ideal of approaching one’s department, program, or course in a spirit of inquiry. Boiled down, an assessment culture is simply a shared ethos of faculty and staff across campus who maintain professional curiosity and are ready to seek, and if necessary implement, new approaches to improve student success. What instructor has not had the baffling experience of discovering that a successful approach to teaching one section in the morning falls flat in the afternoon section, or vice versa, with no discernible reason why? One might say these are the mysteries of teaching, yes, but one might also look upon the distinct responses as an opportunity to examine more closely whether the needs of the afternoon students are simply different and require a creative response.

An assessment culture indicates a spirit of continual inquiry that is campus‐wide, with professional development opportunities extending to departments of student services and student success programs. A college generates all kinds of experiences for a student. A student’s experience in the classroom represents a significant investment of time, energy, and money, but it can hardly be separated from his or her numerous interactions with staff. More and more, the roles of staff, especially those in student services, are beginning to be assessed. The great majority of our students have never set foot on a college campus and sometimes find the process overwhelming.

Karen Thomas, Registrar at BCC, works closely with Admissions and Enrollment services; together, they have developed assessment processes that minimize the frustration of our students as they make the many stops around campus to enroll, stay enrolled, and thrive in the college community. “There have been many positive effects from assessing our services. [Assessment] has brought our colleagues together in a collaboration we did not have before. One concern is the student who simply drops out of the process and leaves campus because she’s not used to dealing with a bureaucracy. For example, Admissions and the Health Department came together after assessing similar elements of their programs and worked out a way to simplify the immunization process on campus. It might not sound like much, but when we looked closely we saw that the process had glitches and that if we worked together we could smooth the process for entering freshmen. Recognizing how we share the goal of student success encourages us to share our concerns and expand collegiality across departments” (Personal communication, April 13, 2017).

At BCC, we have made small and large strides in developing an assessment culture. The credit for a more evidence‐based foundation goes to faculty and staff who have sought to look closely at all the ingredients that go into evaluating student work and performance. When students stay on goal and graduate, we are all inspired. It takes dedicated individuals to first get assessment going; it takes a village to maintain a culture of evidence‐based teaching and service.

To get the news from assessment takes time and effort, especially in the beginning stages of developing an assessment practice. The practice, however, takes less time and becomes more purposeful as it brings faculty and staff together, encouraging them to look closely at their missions, engage larger questions about their programs, and discover how to improve services and the transmission of knowledge and skills. For those who persist, new ways of seeing emerge. The well‐being of the student remains the priority; issues of compliance become secondary and not so difficult after all. Assessment raises our consciousness about what our students are learning, or, more to the point, what they are not learning that we somehow think or believe they are learning. Hundreds of books have been published, and thousands of templates and rubrics have been created over the past decade. We have learned that what one faculty member might find useful, another might find too prescriptive, but we work toward student success, knowing that improvement, not perfection, is our goal. An increase in awareness is the gift that assessment offers.

Author(s)

Richard LaManna is the Director of Academic and Student Success Assessment at Bronx Community College (CUNY).

References

  • Hutchings, P., & Kuh, G. D. (2015). Assessment and initiative fatigue: Keeping the focus on learning. In G. D. Kuh, S. O. Ikenberry, & P. Hutchings (Eds.), Using evidence of student learning to improve higher education (pp. 183– 200). San Francisco, CA: Jossey‐Bass.
  • Ikenberry, S. O., & Kuh, G. D. (2015). From compliance to ownership: Why and how college and universities assess student learning. In G. D. Kuh, S. O. Ikenberry, & P. Hutchings (Eds.), Using evidence of student learning to improve higher education (pp. 1– 23). San Francisco, CA: Jossey‐Bass.
  • Suskie, L. (2004). Assessing student learning: A common sense guide. Bolton, MA: Anker.
 
