Conducting Comprehensive Assessment Within Community Colleges: Administrative, Educational, and Student Support (AES) Assessment with the Shults Dorimé‐Williams Taxonomy

The Assessment Council of the City University of New York gratefully acknowledges Wiley Online Library and New Directions for Community Colleges (Jossey-Bass New Directions Series) for permission to reprint this article from Volume 2019, Issue 186, What Works in Assessment, for this inaugural issue of Assess@CUNY, a publication of the CUNY Assessment Council. 

Abstract

This chapter introduces the Borough of Manhattan Community College’s Institutional Effectiveness Assessment Plan and the Shults Dorimé‐Williams Taxonomy as resources for the enhancement of assessment within administrative, educational, and student support units.

Evolving external demands, which include requirements that colleges and universities examine their administrative, educational, and student support (AES) units, reflect fundamental changes within the primary regulatory bodies responsible for higher education accountability—regional accreditors. There are seven regional, nationally recognized accreditors of traditional postsecondary education institutions. Historically, accreditors focused on academic programs, continuing education, and vocational training; however, the publication of A Nation at Risk: The Imperative for Educational Reform (US Department of Education, 1983) increased pressure on regional accreditors to evaluate student outcomes and institutional effectiveness. In the 1980s, largely in response to heightened scrutiny from the federal government, the Southern Association of Colleges and Schools (SACS) became the first regional body to explicitly require that colleges assess student learning and evaluate educational effectiveness inside and outside the classroom. Attention shifted from inventorying inputs and amassing institutional resources to the effectiveness, relevance, and appropriateness of institutional assessment processes and systems (Alfred, 2011; Brittingham, 2009; McGuire, 2009).

Institutional effectiveness has become increasingly important during institutional reaccreditation as governmental and civic pressures demand evidence of value and justification of public financial investment in higher education (Alfred, 2011; Ewell, 2011). Put simply, institutional effectiveness is a measure of how well an institution is achieving its mission (Ewell, 2011; Middaugh, 2010). By this definition, institutional effectiveness necessarily varies from institution to institution. This is especially true for community colleges, which have long attempted to be all things to all people. Community colleges must therefore reconsider how they evaluate institutional effectiveness, which requires integrating AES assessment into operations and recognizing these units’ direct and indirect impact on student learning and success. Institutional effectiveness requires each area within an institution to have clearly defined missions, goals, and outcomes that are aligned with the institutional mission and goals. We will discuss the importance of this alignment to assessment and planning later in the chapter.

Institutional Effectiveness Plan

Borough of Manhattan Community College (BMCC) is one of the 24 colleges in the City University of New York (CUNY) system. BMCC is the largest institution in the system with enrollments that regularly surpass 27,000. As an urban college with one of the most racially diverse (more than 85% of the students come from non‐White populations) and economically disadvantaged student populations in the country, the College has committed itself to increasing retention, transfer to baccalaureate institutions, and graduation rates. AES assessment and evaluation, as a foundation of an institutional focus on assessing and improving student learning and the environment for student success, has become a priority.

Since the plan’s initiation in 2016, more than 50 units have been identified, and the Office of Institutional Effectiveness and Analytics (IEA) has worked to develop an assessment framework that includes unique missions, goals, and student learning and support outcomes for each. The Institutional Effectiveness Plan documents the College’s philosophy and approach to assessment and is publicly available on the institution’s website. Guided by an integrated approach that aligns assessment, planning, and resource allocation, it articulates expectations for academic and AES units. By linking the AES assessment process to institutional planning efforts, institutional effectiveness reporting, and the resource allocation process, the College has ensured that AES assessment supports institutional effectiveness and decision‐making, while also meeting external requirements.

Engaging in Student Learning Assessment Outside Academic Programs

Broadening assessment practices outside of the classroom contributes to a holistic understanding of student learning while offering additional avenues to support student success. However, assessment outside of academic areas, where it is addressed at all, remains in its nascent stages at many institutions. This section describes the development of assessment in AES units and highlights gaps in approaches to assessment.

Bloom’s taxonomy, first published in 1956 under the direction of Benjamin Bloom, is the standard hierarchical model that classifies educational learning objectives by levels of complexity and specificity (Boslaugh, 2019). The original categories are knowledge, comprehension, application, analysis, synthesis, and evaluation; a 2001 revision recast these as remember, understand, apply, analyze, evaluate, and create. In either form, the model provides a framework for educators, practitioners, and support professionals to structure curriculum, learning objectives, desired outcomes, and assessment activities. This resource, however, is largely unhelpful to college support and service areas. Historically exempt from systematic assessment, these units are now building efficacy among staff to understand the why and how of assessment (Elkins, 2015; Roberts, 2012; Upcraft & Schuh, 1996). Among the few AES units that have regularly engaged in assessment are those located within student affairs.

Student Affairs is a logical starting point for a discussion about assessment in AES units because of the resources available that have increased the efficiency and efficacy of practitioners assessing student learning. Since the 1980s, these departments have had the benefit of guidance and assessment benchmarks from numerous professional organizations (e.g., the Council for the Advancement of Standards in Higher Education [CAS], Student Affairs Administrators in Higher Education [NASPA], and the American College Personnel Association [ACPA]) (ACPA, 2018; CAS, 2015; Elkins, 2015; Henning & Roberts, 2016; NASPA, 2018). Knowledge of best practices, disseminated through conferences, institutes, publications, and training, has provided a framework for relevant and meaningful systematic assessment.

While assessment practice and acceptance have grown, traditional approaches focusing on assessing student learning in curricular and cocurricular settings (Busby, 2015; Elkins, 2015) fail to consider the full scope of the student experience. This narrowing is evident in the work of scholars and practitioners who argue for explicit connections between the work of student affairs professionals and the academic mission of an institution (Upcraft & Schuh, 1996). We have observed that the rush to increase assessment activities wrongly assumed that all areas should and must articulate and measure student learning outcomes in a manner similar to academic departments. In reviewing a diversity of institutions, we found that assessment language, even for AES units, typically referred only to the assessment of student learning outcomes. The obvious result is “checklist assessment” deemed irrelevant to many areas and devoid of an appreciation for the unique contributions that all units make in enhancing student success and improving the student experience. In an attempt to meet these expectations, AES units were relegated to measuring student satisfaction, at best an indirect measure of services and support. Forcing units to examine only student learning and student‐centered outcomes, especially within units that do not directly interact with students, hinders individuals responsible for supporting AES assessment. This approach reflects an exclusionary philosophy toward nonstudent‐facing offices that is likely to derail any meaningful assessment and relegates units not focused on student learning to second‐tier status.

AES Assessment at BMCC

Fortunately, over the past decade, colleges and universities wrestling with the notion of AES assessment have developed “nonacademic” outcomes (Henning & Roberts, 2016), variously referred to as administrative, operational, service, or support outcomes. These outcomes, however, have not benefited from a cohesive, comprehensive model comparable to Bloom’s. The BMCC Institutional Effectiveness Plan, coupled with the Shults Dorimé‐Williams (SDW) taxonomy, addresses this gap through a framework that is accessible to all AES units.

The BMCC IE plan incorporates accountability, reporting, and measurement to communicate practices that ensure all departments and units document evidence of their unique contributions to achievement of the institutional mission. Within the plan, AES assessment is operationalized to specifically address the effectiveness of the day‐to‐day activities and responsibilities of all units and departments. Adapted from the AAHE principles of good practice for assessing student learning (Astin et al., 1992), the BMCC plan codifies a philosophy that encompasses the following:

  • Assessment works best when the programs it seeks to improve have clear, explicitly stated purposes;
  • Assessment requires attention to outcomes, but also and equally to the experiences (or activities) that lead to those outcomes;
  • Assessment works best when it is ongoing, not episodic;
  • Assessment fosters wider improvement when representatives from across the community are involved;
  • Assessment is most likely to lead to improvement when it is part of a larger set of conditions that promote change; and
  • Through assessment, educators meet responsibilities to students and to the public.

These practices apply to all assessment activities and solidify the role that all units and departments play in realizing the institutional mission. With leadership from IEA, the first step in establishing and implementing this comprehensive and inclusive philosophy was to develop the IE plan. To increase engagement, communicate the importance of the plan, and ensure relevance, the office worked with the Academic Program and AES Assessment councils to develop and approve the plan. After the plan was approved, the President’s Cabinet, consisting of individuals overseeing all major functional areas of the institution, identified the AES units at the institution; there are currently 44 identified units. Each area was informed of the new assessment expectations and began working with IEA staff to develop missions, goals, outcomes, and assessment plans. The process was far from smooth, given the new requirements, a lack of knowledge about assessment, and the desire of certain units to retain control over their operational activities. However, the process has grown organically, and more units are engaging in both annual assessment and periodic evaluation. IEA determined that the diversity of AES units would require a flexible framework, complete with more technically appropriate language, to aid units in realizing the potential benefits of assessment. As a result, an assessment and planning framework was developed to demonstrate institutional alignment and the importance of having clearly stated missions, goals, and outcomes.

The assessment and planning framework was built from the BMCC IE plan and an understanding of assessment as a tool to help various functional areas understand the connection between their work and the overall mission of the institution. A unit’s mission statement is aligned with the institutional mission statement and establishes the basis for goal and outcome development, as well as assessment, evaluation, and planning for a unit (a division, department, or single office). Unit goals are clear, meaningful statements of the unit’s purpose that represent its day‐to‐day functions; they are derived from the unit mission statement but are also aligned with an institutional goal. Finally, in the BMCC framework, there are two types of outcomes: student learning outcomes, which reflect what students will know, think, or do as a result of unit efforts, and support outcomes, which detail expectations regarding the delivery of services, processes, activities, or functions to students, faculty, or staff.
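
To make the framework’s alignment concrete, the following minimal sketch models its hierarchy in Python. This is our illustration only, not tooling in use at BMCC; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class OutcomeType(Enum):
    """The two outcome types in the BMCC framework."""
    STUDENT_LEARNING = "student learning"  # what students will know, think, or do
    SUPPORT = "support"                    # delivery of services, processes, or functions


@dataclass
class Outcome:
    statement: str
    outcome_type: OutcomeType


@dataclass
class UnitGoal:
    """A clear statement of the unit's purpose, derived from the unit
    mission and aligned with an institutional goal."""
    statement: str
    aligned_institutional_goal: str
    outcomes: List[Outcome] = field(default_factory=list)


@dataclass
class Unit:
    """A division, department, or single office; its mission statement
    is aligned with the institutional mission."""
    name: str
    mission: str
    goals: List[UnitGoal] = field(default_factory=list)
```

Modeled this way, every outcome traces upward through a goal to the unit mission and, by alignment, to the institutional mission, which is the core claim of the framework.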

Shults Dorimé‐Williams Taxonomy

The SDW taxonomy is a resource for the development of support outcomes in the same way that Bloom’s taxonomy functions for the development of student learning outcomes. This taxonomy, along with the verb wheel and descriptors list, has been a vital tool in helping BMCC’s AES units clearly define their mission and goals, develop support outcomes, and develop formal assessment plans.

Development

While working with AES units, we observed a gap in the literature and in available resources focused on administrative or support outcomes. For many AES units, student learning outcomes simply are not appropriate or applicable. To address this void, we created the SDW taxonomy. Based on previous assessment frameworks, models, and language (Henning & Roberts, 2016; Terenzini & Upcraft, 1996; University of Texas Arlington, 2014), the taxonomy provides a common language for all AES units. Like Bloom’s, it is hierarchical; however, instead of distinguishing levels of learning, it details levels of administrative task complexity. The levels of complexity, from least to most, are as follows: delivery, maintenance, management, development, integration, and analysis.
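
Because the SDW levels are strictly ordered by administrative task complexity, they can be encoded as an ordered enumeration. The sketch below is illustrative only; the class name and numeric values are ours, not part of the published taxonomy.

```python
from enum import IntEnum


class SDWLevel(IntEnum):
    """SDW taxonomy levels, ordered from least to most complex administrative task."""
    DELIVERY = 1
    MAINTENANCE = 2
    MANAGEMENT = 3
    DEVELOPMENT = 4
    INTEGRATION = 5
    ANALYSIS = 6


# Comparisons follow task complexity:
assert SDWLevel.DELIVERY < SDWLevel.ANALYSIS
```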

To support the taxonomy, an action verb wheel provides an inventory of appropriate, relevant terms aimed at ensuring measurability, and a descriptors list offers examples of assessment measures.
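
As a hedged sketch of how such a verb inventory might be organized, the mapping below keys example action verbs to taxonomy levels. The verb assignments are hypothetical illustrations of ours and do not reproduce the published verb wheel.

```python
# Hypothetical action-verb inventory keyed by SDW taxonomy level;
# the published verb wheel's actual assignments may differ.
VERB_WHEEL: dict[str, list[str]] = {
    "delivery":    ["provide", "distribute", "respond"],
    "maintenance": ["maintain", "update", "track"],
    "management":  ["coordinate", "oversee", "allocate"],
    "development": ["design", "create", "implement"],
    "integration": ["align", "integrate", "collaborate"],
    "analysis":    ["analyze", "evaluate", "benchmark"],
}

# Example: draft a measurable support outcome using a level-appropriate verb.
verb = VERB_WHEEL["maintenance"][2]  # "track"
print(f"The unit will {verb} monthly service-request resolution times.")
```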

Implementation

The SDW taxonomy became an indispensable tool in the institution’s assessment efforts. Units that had previously been excluded from the college’s assessment process, or that lacked adequate resources to describe and measure their work, became much more involved. Areas such as Public and External Affairs and Facilities were able to create assessment plans that spoke to their work without forcing student learning into their outcomes. For example, Public and External Affairs developed unit outcomes stating that they would “enhance communications efforts with internal and external audiences” and “increase the amount of news coverage on local and national levels.” Their assessment plan accordingly included concrete metrics and tools to measure news coverage of BMCC, and they began to track overall communication efforts across multiple platforms and increases in these efforts over time. Facilities’ outcome to “baseline energy use among campus buildings and reduce energy usage starting with electricity” allowed us to demonstrate that many areas were already collecting information that could be used for the purpose of assessment. As a public institution, BMCC has its energy use tracked by the City of New York. Through systematic reporting of these data and the common terminology provided by the SDW taxonomy, Facilities was able to make better use of preexisting data. Using existing information as the baseline, Facilities demonstrated how a minor change, such as installing energy‐efficient light bulbs across campus, could significantly reduce energy usage and spending.
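
As an illustration of how preexisting data can anchor an assessment baseline, the sketch below computes the percent reduction in electricity use against a baseline period. The figures and the function are hypothetical; they are not actual BMCC or City of New York data.

```python
# Hypothetical monthly electricity readings (kWh); not actual BMCC data.
baseline_kwh = [310_000, 295_000, 320_000]  # sample months from the baseline year
current_kwh = [288_000, 279_000, 301_000]   # same months after an efficiency change


def percent_reduction(baseline: list[float], current: list[float]) -> float:
    """Percent reduction of total usage relative to the baseline total."""
    base_total, cur_total = sum(baseline), sum(current)
    return (base_total - cur_total) / base_total * 100


print(f"Electricity use down {percent_reduction(baseline_kwh, current_kwh):.1f}% vs. baseline")
```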

The SDW taxonomy enabled IEA staff and those within these offices to collaborate on creating relevant missions, goals, and outcomes. The result was an assessment plan that was useful and aligned with the work of each office, more efficient planning of future efforts, and a documented demonstration of each unit’s contribution to the institution overall. We use these units as examples to showcase how areas normally excluded from institutional effectiveness efforts can have a significant impact in answering stakeholders’ calls for accountability throughout an institution.

The process of developing and implementing the SDW taxonomy also highlighted several important aspects of conducting successful AES assessment. Each AES unit within and across institutions is unique and must be respected as such. Attempting to develop student learning outcomes using Bloom’s taxonomy is not always appropriate and should not be mandated. It is the flexible structure and nonacademic terminology of the SDW taxonomy that the College’s AES units have valued. The taxonomy provides appropriate language for assessment that extends beyond direct measurement of student learning, brings assessment of support outcomes to the forefront of assessment planning, and allows all areas within an institution to demonstrate their importance and contribution to achieving the institutional mission.

Next, by developing a process that values learning in the classroom equally with support for student learning and the environment for student success, BMCC is able to demonstrate that assessment is not the responsibility of a single individual or office. The broad focus of assessment promotes collective investment in the institutional goals, whether directly related to student learning or not. As illustrated by the framework, assessment matters at every level and ties back to the institution’s mission. Therefore, every unit is responsible for assessment and for showing how it contributes to the achievement of BMCC’s mission. Because many unit missions are only indirectly or tangentially concerned with direct student learning, maintaining a comprehensive assessment structure that values all units and departments is essential to institutional investment in assessment.

Finally, a common assessment language encourages a common understanding of expectations. At BMCC, from the Cabinet to individual staff, we strove to build a culture of assessment that informed everyone of expectations and made the process a collective effort. This included ongoing one‐on‐one meetings with each AES unit; workshops on various assessment topics; resources shared in print and made available on the IEA website; the creation of an AES assessment committee to build a culture of peer assessment; and explicit support in the form of verbal and written communications from the President and his Cabinet.

Implications

In this chapter, we have shared information on the role of institutional effectiveness in assessment, provided context for how AES assessment has traditionally been approached, and shared how the framework and tools developed at BMCC have significantly changed the landscape of AES assessment. Most importantly, we have presented a taxonomy that has now been used by a number of colleges seeking to initiate or enhance AES assessment. To our knowledge, the SDW taxonomy is the only such resource available to individuals seeking to encourage the development of support outcomes. For those in senior leadership who are responsible for assessment at their institution, we believe that these tools, when adapted appropriately, can serve as a catalyst for building understanding and ownership in assessment. This is especially true for those areas that may not currently be engaged with any formal approaches to evaluation. At BMCC, the institutional shift was driven by individuals at every level, but especially those in senior leadership. Having frequent conversations about assessment in all areas promotes its importance.

For those who work in assessment, institutional research, or institutional effectiveness offices, we offer the BMCC IE plan, the assessment and planning framework, and the SDW taxonomy with its supplemental materials as resources to support building on or beginning the work of AES assessment. We believe that truly successful institutional effectiveness means that each individual at the institution can play a role in demonstrating and striving toward achievement of the mission.

Through these resources, we believe any institution can establish a comprehensive institutional effectiveness approach, supported by sound assessment, that enhances AES assessment efforts and increases the usefulness, relevance, and applicability of assessment for improving student learning and the environment that supports student success.

Author(s)

Marjorie L. Dorimé‐Williams, Assistant Professor, Department of Educational Leadership and Policy Analysis, University of Missouri.

Christopher Shults, Dean of Institutional Effectiveness & Strategic Planning; MSCHE Accreditation Liaison, Borough of Manhattan Community College, CUNY.

References

  • Alfred, R. L. (2011). The future of institutional effectiveness. New Directions for Community Colleges, 153, 103–113.
  • ACPA, College Student Educators International. (2018). Commission for assessment and evaluation. Retrieved from http://www.myacpa.org/commae
  • Astin, A. W., Banta, T. W., Cross, K. P., El‐Khawas, E., Ewell, P. T., Hutchings, P., … Moran, E. (1992). Principles of good practice for assessing student learning. AAHE Bulletin, 45(4), 1–20.
  • Boslaugh, S. (2019). Bloom’s taxonomy. Salem Press Encyclopedia. Retrieved from http://search.ebscohost.com.proxy.mul.missouri.edu/login.aspx?direct=true&db=ers&AN=89677526&site=eds-live&scope=site
  • Brittingham, B. (2009). Accreditation in the United States: How did we get to where we are? New Directions for Higher Education, 145, 7–27.
  • Busby, K. (2015). Co‐curricular outcomes assessment and accreditation. New Directions for Institutional Research, 164, 39–50.
  • Council for the Advancement of Standards in Higher Education (CAS). (2015). CAS learning and development outcomes. In J. B. Wells (Ed.), CAS professional standards for higher education (9th ed.). Washington, DC: Council for the Advancement of Standards in Higher Education.
  • Elkins, B. (2015). Looking back and ahead: What we must learn from 30 years of student affairs assessment. New Directions for Student Services, 151, 39–48.
  • Ewell, P. T. (2011). Accountability and institutional effectiveness in the community college. New Directions for Community Colleges, 153, 23–36.
  • Henning, G. W., & Roberts, D. (2016). Student affairs assessment: Theory to practice. Sterling, VA: Stylus Publishing, LLC.
  • McGuire, M., & Silvia, C. (2009). Does leadership in networks matter? Examining the effect of leadership behaviors on managers’ perceptions of network effectiveness. Public Performance & Management Review, 33(1), 34–62.
  • Middaugh, M. F. (2010). Planning and assessment in higher education. San Francisco, CA: Jossey‐Bass.
  • NASPA, Student Affairs Administrators in Higher Education. (2018). Assessment and evaluation. Retrieved from https://www.naspa.org/focus-areas/assessment-and-evaluation
  • Roberts, D. C. (2012). The student personnel point of view as a catalyst for dialogue: 75 years and beyond. Journal of College Student Development, 53(1), 2–18.
  • University of Texas Arlington. (2014). Unit effectiveness process assessment handbook. Retrieved from http://www.uta.edu/ier/UEP/docs/UEPAssessmentHandbook_Updated%2010-25-16.pdf
  • Upcraft, M. L., & Schuh, J. H. (1996). Assessment in student affairs: A guide for practitioners. San Francisco, CA: Jossey‐Bass.
  • US Department of Education. (1983). A nation at risk: The imperative for educational reform. Washington, DC: The National Commission on Excellence in Education.