The Assessment Council of the City University of New York gratefully acknowledges Wiley Online Library and New Directions for Community Colleges (Jossey-Bass New Directions Series) for permission to reprint this article from Volume 2019, Issue 186, What Works in Assessment, for this inaugural issue of Assess@CUNY, a publication of the CUNY Assessment Council.
This chapter provides an overview of three assessment scholars whose work influenced the articles contained in this volume.
One of the earliest widely distributed and read statements of assessment was the 2004 book Assessing Student Learning: A Common Sense Guide by Prof. Linda Suskie, one of three important books on outcomes assessment published that year. Fifteen years ago, the field of outcomes assessment was loosely structured and resources were limited. Government, regional accreditors, and the public were raising their voices for evidence of successful student learning in higher education, and America’s colleges were rushing to assess, even though few department chairs or program and course coordinators knew where to begin, much less how to convince faculty that assessment was not just another passing academic initiative. The rush to respond had two effects: confusion across academic departments, and a narrow understanding of assessment as something necessary (and often painful) for maintaining accreditation. Inevitably, there was a third effect: suspicion of the new lexicon making its way into academia, with its learning outcomes, assessment tools, rubrics, and “closing the loop.”
Prof. Suskie’s book was aptly subtitled. Common sense was needed to calm the waters by stabilizing the new lexicon and clarifying the numerous misunderstandings surrounding assessment, as well as to make clear that the priority was not to satisfy accreditors but to develop reasonable, rewarding processes for using evidence to improve student learning in academic programs. Assessing Student Learning, now in its third edition, stressed common-sense principles in what was then a “nascent discipline” (p. xiii) that had yet to develop models and set down its principles. Perhaps most importantly, the book encouraged practitioners to begin deciding which methods were most appropriate for assessing their disciplines. It stressed flexibility and open-mindedness and reminded readers that there were questions but “no simple answers” (p. xiv): no simple answers because learning is a complex activity, and outcomes assessment is simply a lens through which to view that activity and bring some common-sense order to its continuous unfolding.
Another ground-breaking text, Prof. Barbara Walvoord’s Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education, was published in the same year. Once more, the title spoke to the challenges, confusion, and complexity that characterized outcomes assessment in what we might call the “early years,” when there was much need (and perhaps desire) to perform assessment, but few guides existed that addressed the mechanisms of assessment from the departmental to the institutional level and confronted the challenges of resistance, including issues of academic freedom, student privacy, and the unfounded fear that assessment would be used to evaluate faculty. Prof. Walvoord addressed what she called the two “cultures” of assessment that exist, and sometimes clash, on the college campus: the “Managerial” culture, which values “specification of desired outcomes and objectives, use of data, and hierarchical systems,” and the “Collegial” culture, “marked by high value on faculty independence and autonomy, as well as suspicion of any systematic procedure … applied to what they view as the subtle art of teaching” (p. 7). More than her colleagues, Walvoord saw in the assessment process what she identified as “cross-cultural communication”: a beehive network of concerned professionals speaking the shared language of improving student learning. Her chapters supply evidence of that discussion underway in all corners of the college campus, from “institution-wide planners” to developing rubrics, embedding assessment markers in course work and grading, and the different lenses through which General Education, an evolving enterprise, can be scrutinized. Ultimately, she argued, the accountability that assessment brings to teaching not only satisfies requirements but benefits students, faculty, and the institution at large by keeping the campus continually discussing and seeking ways to improve student learning.
She provided step‐by‐step methods, plentiful examples and an abiding optimism in the process that helped buoy early assessment above the waves of suspicion, doubt, and resistance, much of which has dissipated as assessment enters the lexicon of higher education.
A third influential book, also published in 2004, developed a comprehensive view of assessment: Assessing for Learning: Building a Sustainable Commitment Across the Institution by Prof. Peggy L. Maki. Like her colleagues, Prof. Maki understood that the practice was still in its early stages, though she was able to supply examples from universities and colleges around the country that had begun to engage in assessment years earlier. More than its counterparts, Assessing for Learning took a broader, holistic view, operating under the shared belief that the way to reach the ultimate goal of improving student learning was to develop a sustainable culture of assessment across campus, sustainability being a challenge at even the most assessment-minded institutions. At a recent well-attended assessment conference, the keynote speaker asked the standing-room-only audience how many worked at institutions where assessment was widely practiced; 90% of the attendees’ hands rose. She then asked how many of those with hands raised were aware of assessment results being interpreted and appreciable changes being worked into the next iteration of a program or course. Few hands remained in the air. Sustainability, Maki argues, can only be guaranteed when there is commitment. And while it is heartening for directors and managers of assessment to receive assessment reports on a regular basis, no ongoing committee or council can succeed without buy-in from the executive-level individuals who foster the activity and supply needed support. The ongoing, collegial dialogues about teaching and learning that need to take place even before the mechanisms of assessment are established are significant only if the participants know they will be supported in their attempts to address the challenges they uncover in their investigations.
What these early assessment texts and their scholar–authors achieved was to open the academy’s doors to doubt. Before assessment, how many college programs that were not required to report to outside accreditors regularly took stock of themselves? How many mapped outcomes across a program’s courses to be certain that areas critical to the discipline were covered? These early assessment texts found enthusiastic audiences as classroom instructors and deans came to see that not all was well, that many students were leaving campus without the skills they needed and deserved. We were past the assumption that a college degree always meant that the recipient was “generally educated.”
In this issue of New Directions for Community Colleges (NDCC), we hear from educators whose doubt has led them to adopt various means of inquiry to determine whether students are in fact getting what they need from the classes and services that colleges promise incoming freshmen, from asking the “right question” at Johnson County Community College to large urban community colleges where general education serves to open up worlds and possibilities to students who perceive few options for themselves.
Scholarly contributions provide critical foundations for practitioners. Suskie would agree that jumpstarting a conversation about assessment by asking the right question, as Sheri Barrett outlines in Chapter 3, meets complexity head on; doing so can often demystify a formidable mass of uncertainty about student learning and propel faculty to embark upon the cycle of assessment toward an integrated plan of action. Faculty navigation toward the use of ePortfolio in Deidre Tyler and Emily Dibble’s “Toward Authentic Assessment: Using ePortfolio at Salt Lake Community College” also provides a clear example of Suskie’s belief in open-mindedness and her recognition that no simple answers exist when assessing student learning outcomes. As faculty at Salt Lake Community College struggled to assess general education proficiency, their efforts over time led to the campus-wide establishment of ePortfolio, a tool that not only gauges student learning but also fuels student curiosity and interdisciplinary connections, giving students room to explore new academic subjects more deeply. These are the kinds of results Suskie might hope for, as they underscore her belief that faculty tolerance for complexity and ambiguity is needed for the normal progression toward clarity. A firm believer in systematic evidence to inform decision-making, Prof. Walvoord would argue for widening the reach of assessment beyond faculty to entice other communities, especially local stakeholders, as described in Michael Roggow and Matthew Farron’s article about shaping business curricula by involving the local business community (Chapter 7, this issue). Persuading members of a business community to partner with college faculty in assessing an academic program might be easier if those members had a hand in designing program goals and objectives from the start.
Accordingly, faculty at Schenectady County Community College systematically included community leaders in semistructured interviews to better understand the skill-based needs of the business community. Evidence collected during those interviews paved the way for reimagining the program’s goals and objectives. Yet resistance can still run as an undercurrent in myriad forms, particularly among adjunct faculty, who may feel they exist on the periphery because they are often less integrated into the college culture than their full-time colleagues. In Chapter 6, Harry Buffardi’s work on the role of adjuncts in conducting assessment highlights Walvoord’s warnings about the perils of disconnecting assessment from campus culture. Specifically, some faculty may be left out of such movements because full-time faculty view the activity as solely within their purview, because part-time faculty have not been adequately trained to assess, or perhaps because, as Blevins-Knabe discusses in Chapter 4, community colleges often have fewer financial resources to support the faculty development needed to generate campus cultures that use assessment findings constructively.
Best practices, as Maki suggests, require an inclusive commitment to the assessment of student learning, which can only be established when it is meaningfully anchored in the educational values of an institution and designed to involve a range of faculty and staff. In Chapter 10, Akkaraju, Atamturktur, Broughton, and Frazer provide a fine example, in which formative assessment is key to meaningful interventions to improve the learning and retention of students at high risk of attrition. In Chapter 2, Richard LaManna demonstrates how various voices have come to rely on assessment to inform their areas of student contact in a college where more than 80% of students have remedial needs; from department chair to classroom instructor to registrar’s office, all note the critical role that conscious and continual assessment occupies in their decision-making. Maki’s holistic view of assessment, and its ultimate goal of improving student success, is clearly evident in this journal’s contributions, which afford a range of approaches and culminate in a renewal of academic purpose and, one might say, resilience.
Following the publication of these seminal texts by Suskie, Walvoord, and Maki in 2004, and the nationwide increase in conferences, institutes, workshops, and scholarship that has followed, we see that assessment has become more a necessity, and less an imposition from regional and outside agencies, for expanding our understanding of what is occurring in our community college classrooms. As research deepens and publications such as this one reach their intended audiences, we are likely to find more faculty and staff asking difficult questions, such as why so many of our students graduate or transfer lacking specific skills that we continue to take for granted. A community college earns respect through the educational quality of the students it graduates. How ready are they to enter a career or transfer to an institution of higher learning? Assessment is a vital process for measuring their readiness.
Richard LaManna, Director of Academic and Student Success Assessment, Bronx Community College, CUNY.
Michael Roggow, Dean of Business & Technology Departments, Massasoit Community College.
- Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: American Association for Higher Education.
- Suskie, L. (2004). Assessing student learning: A common sense guide. Bolton, MA: Anker.
- Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco, CA: Jossey‐Bass.
This entry is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.