Mapping the Response to AI and Its Impact on Assessment Redesign Through Document Analysis


This paper investigates the response of higher education institutions to the integration of generative artificial intelligence (GenAI) in assessment design, prompted by the advent of tools like ChatGPT, Claude, Gemini, and Bard. Through a document analysis of 135 English-language websites, including university and educational organization guidelines, the study identifies and discusses emerging trends and approaches to incorporating GenAI into educational assessments. It scrutinizes how universities and organizations initially address the impact of GenAI, focusing on the guidance provided to instructors. The analysis reveals seven principal themes: the terminology used to describe AI’s role in assessment, provisional principles guiding its use, advice to instructors on implementing GenAI, the opportunity to refocus on the purpose of assessment, examples and roles of AI in assessment, relevant theories and pedagogies, and the recognition of this period as the initial wave of GenAI integration in education. The paper highlights the diverse approaches and formats of these guidelines and emphasizes the need for continuous adaptation and reevaluation as the understanding of GenAI in education evolves. This study serves as a foundational exploration of the initial institutional responses to GenAI, setting the stage for future research and development in this rapidly changing field.

Since the release of ChatGPT in November 2022, and later of more generative AI (GenAI) tools such as Claude, Gemini, and Bard, universities and organizations have been striving to create guidelines for instructors on how to use GenAI in their assessment design. The guidelines vary in their language and goals, and they will undoubtedly change as the understanding of GenAI and its use develops.

In this paper, assessment is used to describe the systematic process of evaluating and measuring student learning. It includes formative and summative assessment as well as assessment for learning, as learning, and of learning that take place at the course level.

This paper investigates these initial guidelines and outlines how pioneer universities and educational organizations have addressed the presence of GenAI and its impact on assessment. My research question was “What guidance related to GenAI and assessment do higher education institutions provide to their instructors?” My findings create a starting point from which the use of AI in assessment design can be tracked to inform future updates to institutional guidelines.

I conducted this analysis on publicly available online guidance documents from higher education institutions and educational organizations using online search techniques, listservs, and a Padlet on GenAI (CETL et al., 2023). Document analysis is the process of examining and evaluating documents to extract useful information, gain insights, or make informed decisions “in such a way that empirical knowledge is produced and understanding is developed” (Bowen, 2009, p. 33). Thematic analysis can be considered a form of pattern recognition within the data. It develops emerging themes into categories used for further analysis, making it a useful practice for grounded theory. It includes careful, focused reading and rereading of data, as well as coding and category construction.

Exclusion and Inclusion Criteria
I reviewed a total of 135 English-language websites for the analysis, of which 98 (73%) had content related to AI and assessment and, notably, 37 (27%) did not. The content had to pertain to institutional approaches to GenAI and assessment. I excluded course-level syllabi, opinion pieces, and policies that did not explicitly mention assessment. I included guidance or policy documents created by university teaching and learning centers, provost offices, or specific faculties. In addition to guidance from universities from around the world, I also reviewed that of educational organizations such as UNESCO, EdCan Network, Jisc, the Tertiary Education Quality and Standards Agency, and the European Network for Academic Integrity Working Group Centre for Academic Integrity (E-CAIU).

The analysis revealed that universities took diverse approaches to the creation of these guidance documents, their formatting, and the messages portrayed. Broadly speaking, documents related to GenAI could be grouped into policy documents, resources for students, resources for staff, and blogs. Some universities had webpages dedicated to AI and assessment, others had FAQ pages, and still others had links to external resources (e.g., Georgetown referenced Vanderbilt and Yale) or adopted a regional approach (e.g., Australasian Academic Integrity Network, 2023). Macquarie University in Australia had resources that required a username and password for access. The analysis is summarized in seven themes: (a) terminology; (b) provisional principles; (c) advice to instructors; (d) an opportunity to refocus on the purpose of assessment; (e) examples, uses, and role of AI; (f) theories, pedagogies, concepts, and analysis; and (g) first wave.

The terminology used to describe the relationship between AI and assessment varied among institutions. Differences were closely related to the role of GenAI in assessment or the relationship between GenAI and assessment. Some terminology acknowledged the important role of AI in assessment, such as “AI-enabled assessment,” “AI-driven assessments,” “GenAI enhanced,” “AI-based,” and “advancing assessment with AI.” Other sources, such as Princeton University (McGraw Center for Teaching & Learning, n.d.), explained that AI had only a minimal impact, acting merely as a compiler. Some wording demonstrated the relationship people have with AI, such as the phrase “assessments for an AI-enabled world” used at University College London (n.d., para. 1).

Some documents used terminology that did not include GenAI at all, but rather referred to new ways of thinking about assessment given the GenAI disruption, such as forward-thinking assessment (Volante et al., 2023), rethinking assessment practices (Assessment, AI and Academic Integrity, n.d.), assessment reform (Jisc, 2023), and future-focused assessment (Monash University, n.d.). Notably, Anohina (2005) found the same phrasing when analyzing the terms used to describe virtual learning.

In 2005, Donohue and Howe-Steiger described terms related to e-learning as a cacophony of jargon. The same description could be applied in 2023 for terms related to GenAI and assessment. Nevertheless, it is important to analyze these terms because they have their own nuance. Instructors and institutions are dealing with a recent phenomenon that does not yet have established definitions. A deeper analysis of this terminology would therefore improve understanding of how the role of AI is perceived, how it is projected to impact assessment design, and what links exist between terminology, identity, and learning context.

Provisional Principles
The second theme present in the guidance documents was a set of “provisional principles,” a term borrowed from McMaster University (2023). These principles highlight the complex relationship between academia and GenAI, acknowledging both the risks and opportunities this technology presents and emphasizing the varied reactions and expectations it elicits. The first provisional principle I found in my data refers to GenAI’s multifaceted impact. The University of Guelph (2023) exemplified this multifaceted impact, speculating that the application of GenAI will be influenced by different disciplinary cultures.

A second principle, in terms of usage policies, is a consensus against outright banning of GenAI in assessments. Although some educators may consider implementing a ban in full, it is important to weigh whether such a prohibition might inadvertently restrict GenAI as an assistive tool—or whether it could even be enforced. Detection of unauthorized GenAI use is generally deemed unfeasible (University College London, n.d.), yet institutions such as Singapore Management University (2023) have still advocated for monitoring where possible and University of Delaware (Guidry, 2023) provided links to two detection tools.

A third principle acknowledges GenAI’s growing role in education and the need to adjust academic spaces accordingly. The projected omnipresence of AI and its tools in everyday life, as the University of Alberta (Centre for Teaching and Learning, n.d.-a) has noted, necessitates adaptation in academic spaces as part of teaching modern learners. University of Warwick (Fischer et al., 2023) further underscored the linkage between technology and pedagogies, advocating for their joint consideration in developing comprehensive assessment strategies.

Transparency was also considered an important principle. Recommendations included providing clear reasons behind the decisions of the instructors (Centre for Teaching and Learning, University of Alberta, n.d.-a, n.d.-b), demonstrating to the learners that instructors are also learning about AI (Centre for Learning and Teaching, n.d.), and designing for transparency (Teaching and Learning Resource Center, n.d.).

For the most part, the use of GenAI in assessment has been put at the discretion of instructors. Therefore, its utilization varies. This variation is indicative of the far-reaching consequences of any policy. The guidance documents I reviewed provided a range of actions that could be taken but did not specify any one action that should be followed by everyone (e.g., Cornell University, n.d.; Institute for Teaching and Learning Innovation, n.d.; The CIEL Blog, n.d.). The documents did include some directions for instructors; for example, the British Columbia Institute of Technology (n.d.) advised, “If an assignment is easily done by an automated response system, is it worth asking students to do it?” (p. 3).

Among the universities whose guidelines were included in this analysis, some acknowledged that AI will have a role in preparing students for future careers. University of Technology Sydney (LX Team, 2023), for example, has promoted the concept of “AI-ready” lawyers. This forward-thinking approach includes developing GenAI literacy among students and staff. The Russell Group (n.d.) emphasized equipping staff to support students in using AI tools effectively and appropriately. This preparation is integral to ensuring graduates are equipped for a technologically advanced workforce.

A final principle I uncovered is a cautionary one against prioritizing assessment security over authentic learning experiences, equity, and student well-being (e.g., Centre for Pedagogical Innovation, n.d.). This principle encourages educators to consider how AI aligns with learning outcomes, course content, degree-level expectations, experiential learning, learning assessment outcomes, and work-integrated learning. The intent is to balance technological advancements with core educational values, ensuring that assessments are fair, relevant, and beneficial to all students.

Advice to Instructors
Most of the guidance documents I reviewed had a section that contained advice for instructors, constituting an institutional response to instructor expectations. Much as during the COVID-19 pandemic, faculty members are looking for guidance and advice. By providing this advice, universities have also acknowledged instructors’ need for it. As part of the wide array of advice, different approaches were used to describe how instructors might allow the use of GenAI in their assessment. Some documents included the benefits and limitations of AI in assessment (Teaching and Learning Resource Center, n.d.). Advice on wording was a prevalent topic (e.g., Centre for Teaching and Learning, University of Alberta, n.d.-b), and many sources included examples of wording in syllabi, such as Georgetown University (Center for New Designs in Learning and Scholarship, n.d.-a, n.d.-b) and University of Delaware (Guidry, 2023). Many universities provided different usage scenarios. For example, if instructors at the University of Alberta chose to allow the use of GenAI for brainstorming, they could use the following sentence:

You are asked to use Generative AI tools in this course. AI use will, however, be dependent on assignment and assessment requirements. Please follow all assessment task-specific directions and guidance as provided. If you have any questions or concerns, please do not hesitate to ask during office hours or after class. (Centre for Teaching and Learning, University of Alberta, n.d.-b, Sample Statement A)

The documents also contained suggestions that instructors could share with their students on how to acknowledge the use of AI. Some guidelines gave advice on the choice of the assessment itself (Instructional Technology and Design Services, n.d.; Liu & Bridgeman, 2023a). Many documents provided additional GenAI tool suggestions that instructors could explore and recommend to their learners. Some universities had built custom versions of GenAI tools to serve specific purposes. Liu and Bridgeman (2023a), at the University of Sydney, for instance, had a unique resource that explained how instructors could assess students’ use of AI. The resource contained rubrics that evaluated the use of AI in the assessment, among other elements.

Georgetown University (Center for New Designs in Learning and Scholarship, n.d.-a) included a list of questions that instructors could ask themselves when designing their assessment, such as “How might students use AI tools while working on this assignment? How might AI undercut the goals of this assignment? How could you mitigate this? How might AI enhance the assignment? Where would students need help figuring that out?” (Questions to ask section, para. 1).

Another recurring piece of advice, offered for example at Macquarie University (Kozar, 2023), was to ask students to keep an audit trail or logbook of how they had used GenAI throughout the course. Instructors were advised to have learners reflect on their logbooks and receive a grade on that reflection. In fact, some universities, such as Ohio State University (Teaching and Learning Resource Center, n.d.), recommended that instructors learn from their students and be even more active in asking them how they are using AI.

Of interest, Chapman University (n.d.) was the only university in the sample that discussed accessibility. In a section on how AI technologies might help people with communication disabilities, the university provided examples of how GenAI could make assessment more equitable. As GenAI becomes more relevant and understanding of its use grows, there should be more focus on how to use GenAI for greater equity.

An Opportunity to Refocus on the Purpose of Assessment
As the University of Lethbridge (Shapiro, 2023) has noted, educators must rethink assessment methods. Refocusing on the purpose of assessment is a theme that represents a new direction for assessment. This notion allows instructors to move away from policing students’ use of GenAI and instead focus on the assessment itself and consequently on students’ learning. Some universities have differentiated between short- and long-term approaches to assessment redesign, highlighting that the long-term approach should be to redesign assessment entirely (King’s College London, n.d.). Most of the universities have focused on giving instructors advice and resources to design alternative ways of assessing students.

This principle also included the concepts of iterative assessment (Centre for Pedagogical Innovation, n.d.), scaffolded assessment (Centre for Teaching, Learning and Technology, 2023), and nested or staged assessment (Assessment, AI and Academic Integrity, n.d.), approaches that allow learners to learn during the assessment and allow instructors to provide adequate feedback. Refocusing on the purpose of assessment entails the creation of meaningful assessments in which students can extend or expand upon any content created by GenAI (e.g., Liu & Bridgeman, 2023a). It also entails designing authentic assessments—a theme found in the majority of the documents.

Rethinking also means focusing on employability. Instructors have been advised to rethink what skills would make their graduates more employable and design their assessments around those skills (Khan, 2023; Kinash et al., 2018; Shapiro, 2023). Assignments that can be easily completed by GenAI systems are of questionable value (British Columbia Institute of Technology, n.d.). Ohio State University (Teaching and Learning Resource Center, n.d.) remarked upon the undeniable reality of the implications of AI in education and the need to address them. These perspectives collectively highlight the flexibility and thoughtfulness required in incorporating GenAI into academic settings, as well as the opportunities for innovation in course and assessment design.

Examples, Uses, and Role of AI
Most of the documents in my analysis included examples of how instructors are using, or could be using, GenAI in their assessment. Some universities provided real examples from the university itself (Centre for Teaching, Learning and Technology, 2023; Liu & Bridgeman, 2023a). This use of context-specific examples is an indication of a learning organization (Kools & Stoll, 2016). It also offers the potential for social learning (Kendal et al., 2005), allowing the instructors to learn what is happening at their own university (Shettleworth, 2009). A second type of example was the provision of links to examples at other universities, like the ones found at Chapman University (n.d.) and the University of Delaware (Guidry, 2023). A third variation was hypothetical examples of what instructors could include and comparisons between what an assignment might look like before and after integrating GenAI (e.g., Center for New Designs in Learning and Scholarship, n.d.-a). Georgetown University (Center for New Designs in Learning and Scholarship, n.d.-b) also included the rationale that instructors might use while choosing one approach or the other for the use of AI in assessment.

Along the same lines, Flinders University (n.d.) described the different affordances of AI in assessment: AI in the planning stages of a task, AI as a core part of the task, AI for self-testing, and AI as a copyediting tool. Other universities described the different roles that AI can take in the assessment: AI as initial support for ideation and brainstorming (Liu & Bridgeman, 2023a), AI as feedback support, AI as content generator for creativity and research (Centre for Teaching, Learning and Technology, 2023), AI as optimizer for rubrics and assessments, AI as a tool for self-directed learning (Chapman University, n.d.), and AI as a copilot (Liu & Bridgeman, 2023b). Similarly, some universities described the tasks that students could do while using AI, such as analyzing GenAI output, assessing the quality of GenAI output, and comparing GenAI output, among other examples (e.g., Instructional Technology and Design Services, n.d.).

Some universities provided links to specific GenAI tools that instructors could direct their students to. Centre for Teaching, Learning and Technology (2023) suggested that instructors explore Perplexity, and Tufts University (Center for the Enhancement of Learning and Teaching, 2023) recommended that instructors explore Elicit, Consensus, and AI Essay Writer. This theme is a large one, and deeper analysis is needed to differentiate between the different roles that GenAI can play. Such an analysis is beyond the scope of this paper but will be the focus of a future research paper.

Theories, Pedagogies, Concepts, and Analysis
This theme had a minor presence in the data. A few documents capitalized on the shortcomings of GenAI, such as one from UC Berkeley (n.d.). Tufts University (Center for the Enhancement of Learning and Teaching, 2023) linked the use of GenAI to theories of change (Latour, 1984) and explained that the decision to use GenAI in assessment could comprise the following steps: resisting and deflecting; reflecting and adapting; and embracing and redesigning. Relational pedagogies such as ethics of care, pedagogy of care, connectedness (Adams, 2018; Noddings, 2005), and trauma-informed pedagogy (Venet, 2021) were also mentioned as ways to create more meaningful assessments, such as at Brock University (Centre for Pedagogical Innovation, n.d.) and Dalhousie University (Centre for Learning and Teaching, n.d.).

Some of the documents included a focus on students, student partnerships, and the important role that students play in the assessment process. For example, Assessment, AI and Academic Integrity (n.d.) at the University of Melbourne dedicated multiple pages to highlighting the experiences of students. Jisc (2023) also published a report about students’ perceptions of the use of AI in assessment.

The University of Maryland (Teaching and Learning Transformation Center, 2023) included the concept of constructive alignment (Biggs, 1996), in which learning outcomes are clarified before teaching takes place, as a priority when designing assessments. E-CAIU (Khan, 2023) also addressed the importance of revisiting assessments to make sure they align with learning outcomes. The learning outcomes themselves might also need to be reviewed (UNESCO, 2023). The enactment of these theories, pedagogies, and concepts as a result of GenAI is also an important field of study, especially compared to the surge in the use of relational pedagogies during the pandemic.

First Wave
The seventh theme revealed in the document analysis captures the idea that this is the first wave of guiding documents. For example, the University of Alberta (Centre for Teaching and Learning, n.d.-b) specified that “the content below presents initial guidance for addressing AI integration in teaching while prioritizing student learning and assessment” (para. 1). Many sources included statements indicating that these guidelines would be regularly reviewed and revised, such as this one from the University of Alberta (Centre for Teaching and Learning, 2023):
Be aware: Due to rapid iterations and advances in Generative AI technologies and tools, teaching and learning advances using AI are in a near-constant state of flux. Consequently, the advice and suggestions provided here reflect best practices in the current moment. (para. 5)

Similarly, Ohio State University (Teaching and Learning Resource Center, n.d.) added this statement: “The insights and guidance provided in this teaching topic will evolve as new information emerges around AI tools and their impact on teaching and learning” (para. 2).

Additionally, documents included invitations for instructors to share how they are using AI in their assessment. Those statements could indicate that institutions are moving toward a learning organization model. However, more research is needed to determine whether these universities will update themselves by undertaking continuous learning cycles (Yang et al., 2004).

This paper has presented a summary of seven themes that emerged from the document analysis of guidance documents on the use of GenAI in assessment in higher education institutions. Given that the sources represent the first wave of documents, a second document analysis will need to be conducted to understand how these themes are evolving as instructors’ and students’ understanding of GenAI evolves. In addition, future research should include a deeper analysis of each of the themes.


Dr. Eliana El Khoury is an assistant professor at Athabasca University. She researches alternative methods of assessment. Her research agenda focuses on AI in assessment, equity in assessment, and open educational resources as assessments.



Adams, K. (2018). Relational pedagogy in higher education [Doctoral dissertation, University of Oklahoma]. OU Dissertations.

Anohina, A. (2005). Analysis of the terminology used in the field of virtual learning. Journal of Educational Technology & Society, 8(3), 91–102.

Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364.

Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27–40.

Donohue, B. C., & Howe-Steiger, L. (2005). Faculty and administrators collaborating for e-learning courseware. Educause Quarterly, 28(1), 20–32.

Kendal, R. L., Coolen, I., van Bergen, Y., & Laland, K. N. (2005). Trade‐offs in the adaptive use of social and asocial learning. Advances in the Study of Behavior, 35, 333–379.

Kinash, S., McGillivray, L., & Crane, L. (2018). Do university students, alumni, educators and employers link assessment and graduate employability? Higher Education Research & Development, 37(2), 301–315.

Kools, M., & Stoll, L. (2016). What makes a school a learning organisation? (OECD Education Working Papers). OECD.

Latour, B. (1984). The powers of association. The Sociological Review, 32(1_suppl), 264–280.

Noddings, N. (2005). The challenge to care in schools: An alternative approach to education (2nd ed.). Teachers College Press.

Shettleworth, S. J. (2009). Cognition, evolution, and behavior. Oxford University Press.

Venet, A. S. (2021). Equity-centered trauma-informed education. W. W. Norton & Company.

Yang, B., Watkins, K. E., & Marsick, V. J. (2004). The construct of the learning organization: Dimensions, measurement, and validation. Human Resource Development Quarterly, 15(1), 31–55.

Anselmo, L., Kendon, T., & Moya, B. (2023, February). A first response to assessment and ChatGPT in your courses. Taylor Institute for Teaching and Learning, University of Calgary.

Assessment, AI and Academic Integrity. (n.d.). Using AI to enhance assessment. University of Melbourne.

Australasian Academic Integrity Network. (2023, May). Summary of institutional responses to the use of generative artificial intelligence (Version 1.1).

Bommenel, E., & Forsyth, R. (2023). The potential impact of AI tools on assessment. Lunds University.

British Columbia Institute of Technology. (n.d.). An introduction to generative AI tools.

Burk, A. (2023). What does AI mean in your classroom? The Ciel Blog.

Carleton University. (2023). Recommendations and guidelines.

Carter, M. (2020, October 6). Professor Paul Fyfe brings a humanistic approach to data. North Carolina State University.

Center for New Designs in Learning and Scholarship. (n.d.-a). Assignments. Georgetown University.

Center for New Designs in Learning and Scholarship. (n.d.-b). Policies. Georgetown University.

Center for Teaching Excellence. (n.d.). ChatGPT and generative AI. University of South Carolina.

Centre for Teaching, Learning and Technology. (2023). Assignment and assessment design using generative AI. University of British Columbia.

Center for the Enhancement of Learning and Teaching. (2023, Fall). Artificial intelligence resources for Tufts faculty and staff. Tufts University.

Centre for Learning and Teaching. (n.d.). Designing assessments with A.I. in mind. Dalhousie University.

Centre for Pedagogical Innovation. (n.d.). Designing assessment to mitigate the use of AI writing tools. Brock University.

Centre for Teaching and Learning, University of Alberta. (n.d.-a). AI-squared—artificial intelligence and academic integrity.

Centre for Teaching and Learning, University of Alberta. (n.d.-b). Statements of expectations (syllabus).

Centre for Teaching and Learning, University of Alberta. (2023, July 19). Teaching in the context of AI.

Centre for the Integration of Research, Teaching and Learning. (2023). Short guide 9: Assessment in the age of AI.

CETL, Stephens, C., Taylor, M., & Yadegari, S. (with 8 anonymous contributors). (2023). University policies on generative AI. Padlet.

Chapman University. (n.d.). Artificial intelligence in the classroom: A collection of resources/ideas prepared by CETL.

Clay, G., & Lee, C. W. (2023). Embracing constructive dialogue and oral assessments in the age of AI. Inside Higher Ed.

Compton, M. (2023). Sandpit: Testing the capabilities of ChatGPT—Examples of reading, precising, reformatting text and refs, tabulation, rubric, feedback and marking.

Cornell University. (n.d.). AI in assignment design.

Crane, K., & Thompson, K. (Hosts). (2023, September 12). Advancing assessment with AI in academia (No.2) [Audio podcast episode]. In Kates Discuss: A Teaching and Learning Podcast. Centre for Teaching and Learning, Dalhousie University.

Digital Futures Institute. (n.d.). Thinking about assessment in the time of generative artificial intelligence. Columbia University.

Education University of Hong Kong. (2023). EdUHK releases pedagogical approaches on AI tools to promote self-regulated learning.

Educator Centre for Academic Teaching and Learning at JU. (n.d.). Preventing and detecting unauthorised use of AI. Jönköping University.

Fischer, I., Mirbahai, L., Buxton, D., Ako-Adounvo, M.-D., Beer, L., Bortnowschi, M., Fowler, M., Grierson, S., Griffin, L., Gupta, N., Lucas, M., Lukeš, D., Voice, M., Walker, M., Xiang, L., Xu, Y., & Yang, C. (2023). How can artificial intelligence (AI) be harnessed by educators to support teaching, learning and assessments? Actionable insights. University of Warwick.

Guidry, K. R. (2023). Considerations for using and addressing advanced automated tools in coursework and assignments. University of Delaware.

Hillier, M. (2023, June 20). Advising students about using and citing generative artificial intelligence for assessment.

Institute for Teaching and Learning Innovation. (2023). Teaching, learning, and assessment with Generative AI.

Instructional Technology and Design Services. (n.d.). ChatGPT and artificial intelligence.

James Cook University. (n.d.). Assessment and artificial intelligence: Information sheet—part 1 of 2.

Jisc. (2023). National centre for AI in tertiary education: Student perceptions of generative AI. University of Manchester.

Khan, Z. R. (2023). Artificial intelligence content generators in education for schools and universities: A good practice guide. European Network for Academic Integrity Working Group Centre for Academic Integrity in the UAE; University of Wollongong in Dubai.

King’s College London. (n.d.). King’s guidance on generative AI for teaching, assessment and feedback.

Kozar, O. (2023). Assessments and AI … A three-stage approach. Macquarie University.

Liu, D., & Bridgeman, A. (2023a). Student-staff forums on generative AI at Sydney. University of Sydney.

Liu, D., & Bridgeman, A. (2023b). What to do about assessments if we can’t out-design or out-run AI? University of Sydney.

Liu, D., Ho, E., Weeks, R., & Bridgeman, A. (2023). How AI can be used meaningfully by teachers and students in 2023. University of Sydney.

LX Team. (2023, August 10). AI case study: Consistent MME projects with Anna Lidfors Lindqvist. University of Technology Sydney.

McGraw Center for Teaching & Learning. (n.d.). Guidance on AI/ChatGPT. Princeton University.

McMaster University. (n.d.). Generative artificial intelligence in teaching and learning.

McMaster University. (2023). Provisional guidelines on the use of generative AI in teaching and learning.

Meenakumari, J. (2021). Harnessing the power of artificial intelligence for summative and formative assessments in higher education. EdTechReview.

Monash University. (n.d.). Generative AI and assessment.

Mulder, R., Baik, C., & Ryan, T. (2023). Rethinking assessment in response to AI. University of Melbourne.

O’Leary, Z. (2013). The essential guide to doing your research project (2nd ed.). SAGE.

Office of Undergraduate Education. (n.d.). AI guidance & FAQs. Harvard College.

Russell Group. (n.d.). Russell Group principles on the use of generative AI tools in education.

Schmidli, L. (with Harris, M., Caffrey, A., Caloro, A., Klein, J., Loya, L., Macasaet, D., Schock, E., & Story, P.). (2023). Considerations for using AI in the classroom. University of Wisconsin-Madison.

Shapiro, S. (2023). Exploring the impact of generative AI on education: Opportunities, challenges, and ethical considerations. University of Lethbridge.

Singapore Management University. (2023). Use of AI tools in assessment and teaching.

Smart, B., & Botha, C. (2023, March 14). A practical guide to ethical use of ChatGPT in essay writing. Mail & Guardian.

Smith, G. (2023). Transform assessment with AI and GPT: A positive approach for teachers. ThisIsGraeme.

St Mary’s AI Steering Group. (2023). Guidance for staff on artificial intelligence: How to maximize the value of AI in teaching and assessment. St Mary’s University.

Steele, S. (2023, November 21). Generative artificial intelligence (GenAI) assessment statements for students. Faculty Learning Hub, Conestoga College.

Teaching and Learning Resource Center. (n.d.). AI: considerations for teaching and learning. Ohio State University.

Teaching and Learning Support Service. (n.d.). Artificial intelligence (AI). University of Ottawa.

Teaching and Learning Transformation Center. (2023, August). Artificial intelligence (AI). University of Maryland.

The CIEL Blog. (2023). Generative AI and assessment. Vancouver Island University.

Trust, T. (n.d.). ChatGPT and education. Northern Illinois University.

UC Berkeley. (n.d.). Understanding AI writing tools and their uses for teaching and learning at UC Berkeley.

UCD Teaching & Learning. (2023). Quick guide on generative artificial intelligence in learning and assessment (faculty guidance). University College Dublin.

University College London. (n.d.). Designing assessments for an AI-enabled world.

University of Amsterdam. (n.d.). How to make your assessment more AI-proof.

University of Auckland. (2023). The use of generative AI tools in coursework.

University of Birmingham. (n.d.). Generative artificial intelligence and its role within teaching, learning and assessment.

Flinders University. (n.d.). Good practice guide: Designing assessment for artificial intelligence and academic integrity.

UNESCO. (2023). Guidance for generative AI in education and research.

University of Glasgow. (n.d.). Learning and teaching: How can I adapt assessment to deal with generative AI?

University of Guelph. (2023). Provisional recommendations for the use of generative AI.

University of Toronto. (n.d.-a). ChatGPT and generative AI in the classroom.

University of Toronto. (n.d.-b). Generative artificial intelligence in the classroom: What is generative artificial intelligence and what application does it have for classroom instruction & learning.

University of Waterloo. (n.d.). Artificial intelligence and ChatGPT.

Volante, L., DeLuca, C., & Klinger, D. A. (2023). Forward-thinking assessment in the era of artificial intelligence: Strategies to facilitate deep learning. Education Canada Network.
