Two Years of Generative Artificial Intelligence in Higher Education: The Seven Waves of Assessment and GenAI

Since November 2022, when generative artificial intelligence (GenAI) exploded onto the higher education landscape, institutions have undergone significant transformations. GenAI remains in a state of flux, as do the concerns and decision-making processes of educators. The past two years have been marked by rapid change, offering valuable insights into how higher education responds to the emergence of disruptive technologies. This paper examines the shifts that have occurred within higher education, specifically in the context of assessment practices, and highlights the evolving priorities and activities over this period. The aim is to document these developments and extract lessons that can inform future approaches.

This paper, building on a previous analysis of how university policies around the world addressed the use of GenAI in assessment (El Khoury, 2024), is based on a new analysis of policy as well as my personal experience. With GenAI publicly available, academics and staff at higher education institutions have gone through several waves of behaviors, priorities, and directions. I selected the term “wave” because it encapsulates the phenomenon being described. A wave initially arrives, capturing the primary focus and dominating attention, often followed by a subsequent wave of similar intensity. Eventually, the wave integrates with the larger body of water, continuing to occur but no longer serving as the central focus.

First Wave: Experimenting, Stressing, and Feeling Betrayed
As with the introduction of any new technology, the first wave of GenAI in assessment sparked a range of reactions and emotions among instructors, including skepticism, imagined affordances, and anticipated affordances (Johannessen, 2024). At one end of the continuum of engagement were instructors who disregarded GenAI, often confident that this technology could not address the specific requirements of their assessments. Somewhere in the middle were the gatekeepers of traditional ways of assessing students. At the opposite end were instructors who actively embraced GenAI, recognizing the importance of guiding their students in its effective use. These instructors also integrated GenAI into their own professional tasks, personal lives, and the development of student assessments. Notably, many of these educators shared their assessment strategies and their incorporation of GenAI through publications and other forums (e.g., Yee et al., 2023). To check how vulnerable their assessments were, instructors began asking GenAI to solve those assessments. Instructors reported feeling stressed, mainly from not knowing what to do about GenAI, and betrayed by students who seemed to devalue learning and look for shortcuts.

During this time, rubrics emerged that assessed the use of GenAI and even measured different types of learning. Some instructors added criteria for reflection on AI use to their rubrics (Brijmohan, n.d.) or for evidence of learning and lifelong learning. In addition, many instructors tried new assessments that deliberately included the use of GenAI (e.g., University of British Columbia, Faculty of Education, n.d.).

Another stressful point for instructors was finding appropriate ways to address dishonesty when GenAI use was suspected but could not be proven. The media and scholarly research suggested ways to detect GenAI in writing, such as overuse of words like “delve” and “synergy” (Botes, Dewaele, Colling, & Teuber, 2025; Brown, n.d.; Efron, 2024; Losey, 2024), yet instructors were unsure how to respond when a paper was clearly written by GenAI. How could they point it out? Should they tell students that they were cheating? These uncertainties sparked the initial conversations about academic integrity and plagiarism in a GenAI world.
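
For readers curious what those media suggestions amounted to in practice, the heuristic can be reduced to a short Python sketch that simply counts how often flagged words appear. The word list and the very idea of a usable threshold are assumptions made here for illustration; a high count is not reliable evidence of GenAI use.

from collections import Counter
import re

# Words that media commentary flagged as "telltale" signs of GenAI prose.
# The list is illustrative only; overuse proves nothing on its own.
FLAGGED_WORDS = {"delve", "synergy"}

def flagged_rate(text: str) -> float:
    """Return occurrences of flagged words per 1,000 words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    hits = sum(counts[w] for w in FLAGGED_WORDS)
    return 1000 * hits / max(len(words), 1)

The obvious weakness of such a check, that plenty of human writers also “delve,” is precisely why instructors were left unsure how to act on these signals.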

Second Wave: Drafting GenAI Policies and Guidelines About Assessments
Following the initial wave of concerns, most universities developed and published their own policies on GenAI use. Although not all policies explicitly addressed the role of GenAI in assessment, those that did emphasized the inherent challenges in detecting AI-generated content. These policies typically delegated the responsibility for determining the appropriate use of GenAI to individual instructors, allowing them to tailor its integration based on course requirements. Additionally, many institutions provided sample syllabus language to guide instructors in framing their use of GenAI in assessment.

Many universities, such as Humber College (n.d.), considered their policy to be a living document, signaling their sense that GenAI is here to stay and will keep changing. Similarly, Carnegie Mellon University included in its policy the statement “Because we do NOT yet know how its use will affect student learning or equity” (Eberly Center, n.d.-b, para. 4) to signal that this is a changing space. The University of Maine (n.d.) stated that its guidelines are reviewed and updated at least once a year. The acceptance that policies and guidelines need to be reviewed and revised has positioned universities as learning organizations. It is worth studying how much new technologies are contributing to universities’ agility.

University web pages contained links to alternative assessments and recommendations for assessment redesign, such as using transparency in assessment and designing “authentic assessments.” This information allowed instructors who were more comfortable with GenAI, or who were looking for alternative assessment options, to explore possibilities. It was clear at this point that the guidelines were just a starting point for even deeper change. Guidelines that included references to assessment redesign, such as transparency, authentic assessment, meaningful assessment, and relational assessment, indicated future priorities at higher education institutions.

Another conversation that started to surface during this wave regarded the need to develop new approaches to plagiarism and to rethink academic integrity, often referred to as “the other AI.” This discussion has already changed the definition of plagiarism and indeed the meaning of writing in a GenAI world. The discussion has evolved to include definitions of “post-plagiarism” (Eaton, 2023), “AI plagiarism,” “Grammarly plagiarism,” and “copy and paste plagiarism” (University of British Columbia, Office of the CIO, n.d.), as well as “AI literacy” and tips for students on using GenAI (Algonquin College, 2024). This conversation continues, and universities are currently looking into updating their academic integrity policies.
The ways in which different universities, with different values and priorities, have reacted to this disruptive technology share many commonalities with multi-site domestication (Johannessen et al., 2024), commonalities that will become more obvious as instructors and departments make sense of how GenAI is changing their disciplines.

Third Wave: Advancing Ethical Practices and Redesigning Assessment
Once policies on GenAI were available to instructors, it was time to support them in understanding the possibilities. The role of teaching and learning centers was paramount during this time to help instructors decide how GenAI would look in their own courses. Examples of resources include a decision flowchart at the University of British Columbia (UBC Okanagan, Centre for Teaching and Learning, 2024) and reflection questions at the University of Calgary (Anselmo & Kendon, 2023).

At this point, two new initiatives surfaced. First, universities started to explore the concept of ethical and responsible use of GenAI (McGill University Library, n.d.), opening a more elaborate conversation about what that means. Second, some universities started investigating ways to evaluate assessment design. For example, Southern Cross University (n.d.) created its own approach to revisiting assessment design, and University College London (n.d.) created a tool that walks instructors through multiple steps to review their assessments and make appropriate changes.

Finally, more inclusion of student voices began. Examples include University College Cork’s (n.d.) partnerships with learners, Vancouver Community College’s (2024) panels that included students’ experience with GenAI, and the GAITAR program at Carnegie Mellon University, where students are offered paid positions to partner in GenAI decisions (Eberly Center, n.d.-a). This trend suggests that students’ roles are changing from recipients to contributors, indicating that the future of higher education may include a greater partnership with students.

During this phase, dialogue about the need to revise learning outcomes began among educators, with institutions taking proactive steps to support faculty. Conestoga College, for instance, amassed a range of resources for instructors to develop and refine learning outcomes (Raji, 2024), and the University of Michigan (n.d.-e) curated a list of skills and competencies for an AI-augmented space. These resources aim to ensure that learning outcomes are aligned with contemporary educational standards and evolving pedagogical approaches, particularly in response to GenAI.

Carrying over from previous waves, experimentation, new approaches to plagiarism, and exploration of assessment redesign continued, while students’ confusion and fear grew about how and when they could use GenAI. The work in this wave began to shift assessment from traditional and disconnected approaches to more holistic ones.

Fourth Wave: Creating, Approving, Buying, and Partnering with GenAI Tools
As understanding of GenAI capacities matured, some instructors dedicated time to testing multiple GenAI tools or to following others’ updates on these tools, and more GenAI toolboxes began to appear. Universities increasingly began to acquire, approve, or develop their own GenAI tools in response to the growing need for such technologies in educational settings. For instance, the University of Sydney developed a proprietary system called Cogniti (Weber, 2024), McMaster University created the Assessment Partner (Assessment Partner, n.d.), and the University of Calgary created SMARTIE (Sabbaghan, 2024). Similarly, the University of Michigan (n.d.-c) introduced its own GenAI tool called U-M GPT, emphasizing its commitment to equity and accessibility in its design. In contrast, the University of Saskatchewan (n.d.) opted for a different approach by approving SMARTIE and making it available to faculty. Other institutions, such as Arizona State University (n.d.), chose to partner with established companies like OpenAI to integrate GenAI technologies into their academic frameworks.

As students’ confusion and fear mounted, universities sought to clarify the use of GenAI tools. For example, the University of Michigan (n.d.-b) posted guidance for students on the appropriate use of GenAI in their assessments. In Edmonton, Concordia University (n.d.) offered even more elaborate instructions to students about GenAI and academic integrity. Some instructors asked students to submit logbooks demonstrating their use of GenAI and to acknowledge or cite GenAI, along the lines of the Artificial Intelligence Disclosure (AID) Framework (Weaver, 2024). For a while, instructors asked students to document and submit all their GenAI interactions, following, for example, the University of Sheffield’s (n.d.) “acknowledge, describe, evidence” template. Initially some instructors graded these interactions, which more recently have served as an ungraded reference.

This variety of approaches illustrates the diverse strategies institutions and instructors have adopted to harness the potential of GenAI in enhancing teaching and learning experiences while addressing ethical considerations, fostering transparency, and building trust among students and educators in its use.

Fifth Wave: Fostering Collective Knowledge About GenAI and Assessment
In the fifth wave, as the academic publishing community engaged in ongoing debates about citing GenAI in research articles (University of North Dakota Libraries, n.d.), universities and educators largely accepted that GenAI had become a permanent aspect of the academic landscape. However, considerable uncertainty remained about its effective integration into teaching and learning practices, alongside a recognition of the need for further experience and understanding. In response, teaching and learning centers, communities of practice, and academic departments undertook proactive initiatives to deepen knowledge about GenAI, such as organizing panels, inviting expert speakers, and hosting conferences dedicated to GenAI implications and applications in academic contexts. These initiatives reflect the importance of developing a more nuanced and informed approach to this transformative technology.

Grassroots approaches also surfaced, such as compilations of examples of AI-driven assessments. Notable examples include the AI Pedagogy Project at Harvard’s metaLAB (metaLAB, n.d.) and Pearls of Wisdom from Oxford Brookes University (Fischer & Gramaglia, n.d.). These resources are valuable repositories for educators seeking practical applications of GenAI in assessment. Further examples of knowledge-building activities tailored towards practical outcomes include the AI Assessment Hackathon at Atlantic Technological University in Ireland (Ginty et al., 2024), where instructors and students collaborated to develop GenAI-powered assessments. Similarly, Queen’s University hosted an AI and Assessment Institute (Queen’s Gazette, 2024), providing instructors with dedicated time to discuss their assessments and explore GenAI integration.

Recognizing the valuable insights instructors possessed through their direct experience, universities began to collect and showcase examples of how faculty members incorporated GenAI into their assessments. For instance, the University of British Columbia, Centre for Teaching, Learning and Technology (n.d.) developed a webpage with examples in disciplines such as writing, engineering, and education. Additionally, many universities implemented website forms for instructors to share their experiences with GenAI integration. These efforts reflected a broader institutional recognition of the need to disseminate knowledge about the evolving role of GenAI in education.

Whereas the first few waves of GenAI discussions were primarily held at the institutional and instructor levels, conversation now shifted to the faculty level. Faculty members realized it was time to foster a common understanding and brainstorm about GenAI. In its latest report, the U.S. Department of Education (2025) offered five recommendations related to GenAI and postsecondary education. Recommendation 2 emphasizes the need to provide spaces for faculty to create cross-department collaborations to make decisions about GenAI (U.S. Department of Education, 2025).

In summary, by the end of the fifth wave, the integration of GenAI into academic settings represented a transformative shift that was reshaping teaching, learning, and assessment practices. The growing emphasis on faculty collaboration highlighted the need for interdisciplinary dialogue and shared decision-making on GenAI’s potential and challenges.

Sixth Wave: Using AI to Support Instructors and Learners
The integration of GenAI into assessment practices continued to garner attention, particularly its potential to assist instructors in various aspects of the teaching process: enhancing grading efficiency, facilitating timely and personalized feedback, and enabling around-the-clock communication with students. GenAI’s potential to streamline these tasks presented a promising avenue for improving the teaching and learning experience. Several institutions explored these applications. For example, the University of Sydney (2024) investigated the use of GenAI clones, AI systems designed to simulate the cognitive processes of instructors in grading and feedback, potentially reducing educators’ workload while maintaining consistency and accuracy. Other universities, such as the University of Sydney (2024-b) and Columbia University’s Teachers College (n.d.), turned to GenAI to provide formative and summative feedback to students, enhancing the responsiveness and scalability of feedback mechanisms. Such immediate and consistent feedback can be particularly beneficial in introductory-level courses, where individual feedback may otherwise be limited.
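
To make the feedback use case concrete, the following minimal Python sketch shows one way a rubric-keyed feedback draft could be generated with a general-purpose GenAI API. It is illustrative only: the rubric text, prompts, and model name are placeholders I have assumed, the example relies on the OpenAI Python client with an API key in the environment, and it does not represent any particular institution’s tool.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# Hypothetical rubric text for an essay assignment.
RUBRIC = (
    "1. The thesis is clear and arguable.\n"
    "2. Claims are supported with course readings or data.\n"
    "3. The writing is organized and sources are cited correctly.\n"
)

def draft_feedback(submission: str) -> str:
    """Return criterion-by-criterion formative comments for instructor review.

    The draft is not released to students without human review, and no grade
    or score is requested from the model.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a teaching assistant. Give constructive, "
                    "criterion-by-criterion feedback on the submission. "
                    "Do not assign a mark or a grade."
                ),
            },
            {
                "role": "user",
                "content": f"Rubric:\n{RUBRIC}\nStudent submission:\n{submission}",
            },
        ],
    )
    return response.choices[0].message.content

Keeping the model away from scoring and routing every draft through the instructor mirrors the human-review condition that later policies (see the seventh wave) attach to AI-generated feedback.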

However, the use of GenAI in grading sparked controversy. One prominent case arose in Texas, where AI tools were trialed to grade student essay tests (Brodkin, 2025). The adoption of AI for grading raised concerns about accuracy, fairness, and transparency. Critics argued that AI grading systems may struggle to capture the nuances of student work, particularly in subjective or creative fields, thus challenging the notion of equitable evaluation (Kumar, 2023). The ethical implications of delegating grading responsibilities to AI, as well as the potential biases inherent in algorithmic decision-making, remain significant points of debate.

To harness GenAI’s potential, some instructors began incorporating chatbots into classroom assignments. An article published by Harvard Business Publishing Education highlighted how custom chatbots have served as course assistants, assignment tutors, process coaches, and reflective guides (Lindgren, 2024). Similarly, some institutional resources offered detailed instructions on creating tailored chatbots for educational purposes (e.g., Stanford University, Graduate School of Education, n.d.). These initiatives demonstrate the growing interest in leveraging AI tools to enhance instructional practices and support student learning.
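
As a sketch of what such a custom course chatbot can look like under the hood, the snippet below wires a persona-setting system prompt to a simple chat loop that keeps the conversation history. The persona text, tutoring rules, and model name are hypothetical, and the example again assumes the OpenAI Python client rather than any specific institutional platform.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# Hypothetical persona for an "assignment tutor" style chatbot.
SYSTEM_PROMPT = (
    "You are a course assistant for an introductory statistics class. "
    "Coach students with questions and hints, explain concepts, and "
    "never provide complete solutions to graded assignments."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    question = input("Student: ").strip()
    if not question:
        break  # an empty line ends the session
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=history,      # full history preserves the coaching context
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Assistant:", answer)

Most of the pedagogical intent lives in the system prompt, which is where roles such as the course assistant, assignment tutor, process coach, and reflective guide described by Lindgren (2024) would be encoded.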

In summary, the integration of GenAI into assessment practices offers opportunities and challenges for the education sector. GenAI holds great promise for improving grading efficiency, feedback delivery, and continuous student engagement, yet its application remains contentious due to concerns about fairness, transparency, and the preservation of academic integrity. As institutions explore GenAI technologies and experiment with different tools, they must also foster ongoing collaboration and discussion to address the ethical, practical, and pedagogical implications.

Seventh Wave: Revising Policies for Instructors
At the two-year mark of GenAI in higher education, the seventh wave has emerged. As instructors increasingly adopted GenAI for grading and feedback and ethical questions swirled, universities revised policies to clarify permissible use. Some explicitly prohibited GenAI for grading students’ work (e.g., McMaster University, 2024; Queen’s University, 2024). These policies do allow instructors to employ GenAI to give feedback, provided that the feedback generated is thoroughly reviewed for accuracy and appropriateness. Many other higher education institutions reversed their approach entirely. A striking example is the University of Sydney (2024), which canceled its previous GenAI policy and decided that everyone could use AI.

Universities have also offered courses on integrating GenAI into teaching practices. Some are designed for institutional use; others are publicly accessible. A notable example is Auburn University’s (2023) suite of courses on AI in teaching, which includes a dedicated module on redesigning meaningful assessments. Courses also include topics such as digital literacy, ethical awareness, and prompt literacy (University of Michigan, n.d.-d). Courses may be commissioned by universities (e.g., Athabasca University, n.d.) or available through platforms such as Coursera. Such initiatives aim to equip educators with the knowledge and skills required to effectively and ethically incorporate AI technologies into their pedagogical approaches, fostering innovation in teaching and learning.

The conversation about integrity and honesty sparked an overhaul of academic integrity information. New definitions appeared, including Conestoga College’s “circumventing independent learning” (Raji, 2024, para. 2). The concept of weaving GenAI into all courses in a program emerged as a significant initiative, emphasizing the need to examine academic programs holistically and assess the impact of GenAI on entire curricula. For instance, the University of Florida (n.d.) introduced “AI Across the Curriculum,” which integrates AI literacy for all students while embedding specialized AI-focused content within specific departments. Similarly, Barnard College has undertaken efforts to develop an AI literacy framework aimed at equipping students with essential competencies to navigate an AI-driven world (Hibbert et al., 2024). Recognizing the transformative impact of AI on careers and workforce demands, the U.S. Department of Education’s (2025) recent recommendations encourage institutions to review, refine, and supplement their program offerings to align with the evolving landscape.

The academic landscape has transformed remarkably in its approach to integrating GenAI technology into teaching, learning, and assessment. Institutions have moved from initial hesitation to full-scale adoption, yet policies, ethical guidance, and support systems remain fluid. The shift towards incorporating GenAI across entire curricula highlights the growing recognition of its role in shaping education. As universities continue to navigate the complexities of GenAI’s integration, a balance between innovation and integrity must be maintained, ensuring that advancements benefit both educators and students in meaningful and ethical ways.

Conclusion
Observing these developments provides not only a snapshot of the current state of education but also valuable insights into the trajectory of GenAI within educational contexts, as the structure and methods of implementation inevitably influence the nature and quality of educational content. This idea, often encapsulated by the principle that “form shapes content,” suggests that the ways in which GenAI is integrated into teaching and assessment will significantly impact both pedagogical practices and learning outcomes. In addition to the seven primary waves and their associated trends, adjacent waves are also influencing assessment practices in the GenAI era. Some trends remain nascent, while others have quickly become irrelevant. By analyzing these trends, we can better understand how emerging tools and approaches might reshape traditional educational paradigms, fostering innovation while also addressing potential challenges. This perspective underscores the importance of thoughtful design and intentionality in leveraging GenAI to enhance learning and teaching.

The third year of GenAI undoubtedly brings new developments, including universities’ focus on integrating diverse GenAI tools, renewed fears concerning the impact of GenAI on learning in particular and on the human brain more broadly, and renewed hopes for well-designed GenAI applications alongside robust assessment tools. Monitoring these changes is essential, as they will illuminate shifting priorities and their implications for our role as educators.

Author

Dr. Eliana El Khoury is an assistant professor at Athabasca University. She researches alternative methods of assessment. Her research agenda focuses on AI in assessment, equity in assessment, and open educational resources as assessments.

Works Cited

Algonquin College, Academic Integrity Office. (2024). Student tips for GenAI use [Infographic]. https://www.algonquincollege.com/academic-integrity/files/2024/06/Student-Tips-for-GenAI-Use.pdf

Anselmo, L., & Kendon, T. (2023, October 10). Exploring artificial intelligence and assessments. University of Calgary, Taylor Institute for Teaching and Learning. https://taylorinstitute.ucalgary.ca/resources/exploring-artificial-intelligence-and-assessments

Arizona State University. (n.d.). AI innovation challenge. https://ai.asu.edu/AI-Innovation-Challenge

Assessment Partner. (n.d.). Assessment partner (beta). McMaster University; MacPherson Institute. https://assessment-partner.com/

Athabasca University. (n.d.). Introduction to AI literacy. PowerED™ by Athabasca University. Retrieved January 30, 2025, from https://powered.athabascau.ca/product?catalog=Introduction-to-AI-Literacy

Auburn University, Office of the Provost. (2023, November 7). AI course extends classroom learning beyond the basics. https://wire.auburn.edu/content/provost/2023/11/07-0900-AI-Course-Extends.php

Botes, E., Dewaele, J.-M., Colling, J., & Teuber, Z. (2025, August 20). Initial indications of generative AI writing in linguistics research publications (Version 1) [Preprint]. PsyArXiv. https://osf.io/preprints/psyarxiv/4yvbp_v1

Brijmohan, A. (n.d.). Rethinking rubrics in the age of generative AI. University of Toronto Mississauga, Robert Gillespie Academic Skills Centre. Retrieved January 24, 2025, from https://www.utm.utoronto.ca/rgasc/media/3442/download?inline

Brodkin, J. (2025, January 29). Texas launches AI grader for student essay tests, says it won’t use ChatGPT. Gizmodo. https://gizmodo.com/texas-launch-ai-grader-student-essay-tests-not-chatgpt-1851397935

Brown, C. (n.d.). 10 telltale signs AI-generated content. LinkedIn. https://www.linkedin.com/pulse/10-telltale-signs-ai-generated-content-clare-brown-0dkdc/

Concordia University of Edmonton. (n.d.). Student guide to generative AI & academic integrity. https://concordia.ab.ca/academic-integrity-at-cue/genai-guide-for-students/

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19, 23. https://doi.org/10.1007/s40979-023-00144-1

Eberly Center. (n.d.-a). Eberly Center student partners (ESPs) co-creator program. Carnegie Mellon University. https://www.cmu.edu/teaching/gaitar/eberlystudentpartners.html

Eberly Center. (n.d.-b). Position statement on generative AI tools in teaching and learning. Carnegie Mellon University. Retrieved January 21, 2025, from https://www.cmu.edu/teaching/gaitar/positionstatement.html

Efron, M. (2024, May 21). Words and phrases that make it obvious you used ChatGPT. Medium. https://medium.com/learning-data/words-and-phrases-that-make-it-obvious-you-used-chatgpt-2ba374033ac6

El Khoury, E. (2024). Mapping the response to AI and its impact on assessment redesign through document analysis. The Assessment Review, 5(1).

Emory News Center. (2023, March 15). Emory launches AI minor, the first of its kind in Georgia. Emory University. https://news.emory.edu/stories/2023/03/er_ai_minor_15-03-2023/story.html

Fischer, I., & Gramaglia, L. (n.d.). Pearls: Generative AI in higher education teaching. Oxford Brookes University; University of Warwick; University of Hull. https://sites.google.com/brookes.ac.uk/genaiheteaching/pearls?authuser=0

Ginty, C., Henry, N., & Antropova, O. (2024). ATU assessment hackathon: Big ideas 2024. Atlantic Technological University. https://www.atu.ie/app/uploads/2024/12/atu-assessment-hackathon-flipbook_compressed.pdf

Giray, L. (2024). The problem with false positives: AI detection unfairly accuses scholars of AI plagiarism. The Serials Librarian, 85(5–6), 181–189. https://doi.org/10.1080/0361526X.2024.2433256

Hibbert, M., Altman, E., Shippen, T., & Wright, M. (2024, June 3). A framework for AI literacy. EDUCAUSE Review. https://er.educause.edu/articles/2024/6/a-framework-for-ai-literacy

Humber College. (n.d.). Generative artificial intelligence in the classroom. Innovative Learning. Retrieved January 30, 2025, from https://humber.ca/innovativelearning/generative-artificial-intelligence-in-the-classroom/

Johannessen, L. E. (2024). Anticipated affordances: Understanding early reactions to new technologies. New Media & Society, 26(12), 6900–6917. https://doi.org/10.1177/14614448231161512

Johannessen, L. E., Nordtug, M., & Haldar, M. (2024). Multi-site domestication: Taming technologies across multiple institutional settings. Information, Communication & Society, 27(11), 2077–2093. https://doi.org/10.1080/1369118X.2023.2255644

Kumar, R. (2023). Faculty members’ use of artificial intelligence to grade student papers: A case of implications. International Journal for Educational Integrity, 19(1), 9. https://doi.org/10.1007/s40979-023-00130-7

Lindgren, T. (2024, May 15). How to create custom AI chatbots that enrich your classroom: Four examples to get you started. Harvard Business Publishing Education. https://hbsp.harvard.edu/inspiring-minds/how-to-create-custom-ai-chatbots-that-enrich-your-classroom

Losey, R. (2024, April 5). Stochastic parrots: How to tell if something was written by an AI or a human? e-Discovery Team. https://e-discoveryteam.com/2024/04/05/stochastic-parrots-how-to-tell-if-something-was-written-by-an-ai-or-a-human/

McGill University Library. (n.d.). Generative AI in teaching and learning: Course module M04: Responsible use considerations. McGill University. https://www.library.mcgill.ca/genai/course.html?id=m04

McMaster University, Office of the Provost. (2024, August). Guidelines on the use of generative AI in teaching and learning. https://provost.mcmaster.ca/office-of-the-provost-2/generative-artificial-intelligence-2/task-force-on-generative-ai-in-teaching-and-learning/provisional-guidelines-on-the-use-of-generative-ai-in-teaching-and-learning/

metaLAB (at) Harvard. (n.d.). AI pedagogy project. https://aipedagogy.org/

Nuala. (2023, March 9). Visualising programme-level assessment. Newcastle University, Learning and Teaching @ Newcastle Blog. https://blogs.ncl.ac.uk/ltdev/2023/03/09/visualising-programme-level-assessment/

Queen’s Gazette. (2024, September 9). An inside look at teaching and learning at Queen’s. Queen’s University. https://www.queensu.ca/gazette/stories/inside-look-teaching-and-learning-queen-s

Queen’s University, Office of the Provost and Vice-Principal (Academic). (2024, October 15). Guidance on the use of generative artificial intelligence in student assessment. https://www.queensu.ca/provost/sites/provwww/files/uploaded_files/Teaching%20and%20Learning/Guidance%20on%20the%20use%20of%20generaI%20artificial%20intelligence%20in%20student%20assessment.pdf

Raji, M. (2024, July 10). Rethinking academic integrity in the age of generative artificial intelligence. Conestoga College, Teaching and Learning at Conestoga. https://tlconestoga.ca/rethinking-academic-integrity-in-the-age-of-generative-artificial-intelligence/

Sabbaghan, S. (2024, April 10). Hello! I’m SMARTIE: Strategic module assistant for rubrics, tasks, and inclusive education. University of Calgary. https://www.smartie.dev/

Southern Cross University. (n.d.). Considering generative artificial intelligence (GenAI) in assessment design. Spark Knowledge Base. Retrieved January 24, 2025, from https://spark.scu.edu.au/kb/tl/considering-generative-artificial-intelligence-genai-in-assessment-design

Stanford University, Graduate School of Education. (n.d.). Classroom resources: Designing your own chatbot. Retrieved January 24, 2025, from https://teachingresources.stanford.edu/resources/designing-your-own-chatbot/

Teachers College, Columbia University. (n.d.). AI in education guide: Using AI for feedback. Digital Futures Institute. Retrieved January 30, 2025, from https://www.tc.columbia.edu/digitalfuturesinstitute/learning–technology/instructional-guides–resources/self-paced-learning-guides/ai-in-education-guide-using-ai-for-feedback/

The University of Sydney. (2024). How Sydney educators are building AI doubles of themselves to help their students. Teaching@Sydney. Retrieved January 30, 2025, from https://educational-innovation.sydney.edu.au/teaching@sydney/how-sydney-educators-are-building-ai-doubles-of-themselves-to-help-their-students/

The University of Sydney. (2024-b). How generative AI can make personalised feedback at scale more consistent and efficient. Teaching@Sydney. Retrieved January 30, 2025, from https://educational-innovation.sydney.edu.au/teaching@sydney/how-generative-ai-can-make-personalised-feedback-at-scale-more-consistent-and-efficient/

UBC Okanagan, Centre for Teaching and Learning. (2024, April 11). Deciding when to integrate generative AI into an assignment. https://ctl.ok.ubc.ca/2024/04/11/deciding-when-to-integrate-generative-ai-into-an-assignment/

University College Cork. (n.d.). AI2ED: Artificial intelligence & academic integrity. Centre for the Integration of Research, Teaching and Learning (CIRTL). Retrieved January 30, 2025, from https://www.ucc.ie/en/cirtl/projects/national/ai2edartificialintelligenceacademicintegrity/

University College London, UCL Teaching & Learning. (n.d.). How to review assessment: FAQs. Retrieved January 24, 2025, from https://www.ucl.ac.uk/teaching-learning/how-review-assessment-faqs

University of British Columbia, Centre for Teaching, Learning and Technology. (n.d.). Assessment design using generative AI. Retrieved January 24, 2025, from https://ai.ctlt.ubc.ca/assessment-design-using-generative-ai/

University of British Columbia, Faculty of Education. (n.d.). AI corner. Retrieved January 24, 2025, from https://learningdesignviews.educ.ubc.ca/ai-corner/

University of British Columbia, Office of the CIO. (n.d.). Principles for the use of generative AI tools. Retrieved January 24, 2025, from https://genai.ubc.ca/guidance/principles/

University of Florida. (n.d.). Artificial intelligence across the UF curriculum. https://ai.ufl.edu/media/aiufledu/resources/AI-Across-the-Curriculum.pdf

University of Maine. (n.d.). Generative AI teaching and learning guidelines. Retrieved January 21, 2025, from https://umaine.edu/communitystandards/resources-for-faculty/generative-ai-teaching-and-learning-guidelines/

University of Michigan. (n.d.-a). Course and assignment (re-)design. Retrieved January 21, 2025, from https://genai.umich.edu/resources/faculty/redesigning-assessments

University of Michigan. (n.d.-b). GenAI in-depth: Specific implications for writing and other disciplines. Retrieved January 21, 2025, from https://genai.umich.edu/in-depth/implications-for-writing

University of Michigan. (n.d.-c). ITS AI services. Retrieved January 21, 2025, from https://its.umich.edu/computing/ai

University of Michigan. (n.d.-d). Prompt literacy in academics. Retrieved January 21, 2025, from https://genai.umich.edu/resources/prompt-literacy

University of Michigan. (n.d.-e). U-M guidance for students. Retrieved January 21, 2025, from https://genai.umich.edu/resources/students

University of North Dakota Libraries. (n.d.). Citing and publishing AI-generated content. AI Resources. Retrieved January 30, 2025, from https://libguides.und.edu/ai-resources/citing_and_publishing

University of Saskatchewan. (n.d.). SMARTIE. https://teaching.usask.ca/learning-technology/tools/smartie.php

University of Sheffield. (n.d.). How to use GenAI for assessment. Retrieved January 24, 2025, from https://www.sheffield.ac.uk/study-skills/digital/generative-ai/assessment

University of Sydney. (2024, November 27). University of Sydney’s AI assessment policy: protecting integrity and empowering students [News release]. https://www.sydney.edu.au/news-opinion/news/2024/11/27/university-of-sydney-ai-assessment-policy.html

U.S. Department of Education, Office of Educational Technology. (2025, January). Navigating artificial intelligence in postsecondary education: Building capacity for the road ahead. https://tech.ed.gov/files/2025/01/OS-24-002833-AI-in-Postsecondary-Ed.pdf

Vancouver Community College. (2024, January 4). 2024 Symposium – AI Panel [Video]. VCC MediaSpace. https://mediaspace.vcc.ca/media/2024+Symposium+%2801+04%29+-+AI+Panel/0_hgycj2h4/191197

Weaver, K. D. (2024, August 4). The artificial intelligence disclosure (AID) framework: An introduction. arXiv. https://arxiv.org/abs/2408.01904

Weber, K. (2024, May 28). University of Sydney creates own genAI, Cogniti. ITNews.com. https://www.itnews.com.au/news/university-of-sydney-creates-own-genaicogniti-608334

Yee, K., Whittington, K., Doggette, E., & Uttich, L. (2023). ChatGPT assignments to use in your classroom today. In University of Central Florida (Ed.), UCF created OER works (No. 8). https://stars.library.ucf.edu/oer/8
