How to Productively Address AI-Generated Text in Your Classroom 

Updated 3/4/2024
See our latest update below: a video on how instructors can talk to students about suspected cases of AI misuse.

News about ChatGPT and its text-generating capabilities has been sweeping across higher education, raising questions about what AI-powered text generators mean for learning, writing, and academic integrity in our classes. Some students may misuse these AI tools, misrepresenting AI-generated text as their own, but there is also great potential for how this technology can transform teaching and learning. This page is intended to provide some ideas on how we can engage with this emerging technology in productive and pedagogically sound ways. 

We will continue to update this page over time, so if you have a specific resource, insight, or teaching idea to share, please let us know.  

What are ChatGPT and AI-generated text? 

The current set of AI text tools is built on large language models: artificial neural networks (algorithms designed to recognize patterns) trained on large datasets of human text and conversation. One of the current models, GPT (Generative Pre-trained Transformer), is trained for the task of conversational language modeling and is fine-tuned to generate more contextually relevant responses. A key element of this is the ability to predict the next words and phrases within a conversational context (similar to what Word and Google do on a smaller scale). ChatGPT is the tool that has most recently hit the headlines, but there are many others evolving, such as Google's Gemini (formerly Bard) and Microsoft's Copilot, which will quickly advance this new field of interactive text generation.  
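To give a concrete (if drastically simplified) sense of that next-word prediction, here is a minimal sketch in Python. It uses simple bigram counts over a toy corpus; the corpus and the `predict_next` function are illustrative stand-ins, since real models like GPT use deep neural networks trained on enormous datasets rather than word counts:

```python
from collections import Counter, defaultdict

# A deliberately tiny corpus; real models train on vastly larger datasets.
corpus = ("the model predicts the next word and the model completes "
          "the next phrase in the next sentence").split()

# Count which word follows which word (a "bigram" model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "next" follows "the" most often here
```

Even this toy version shows the basic idea: generation is driven by statistical patterns in prior text, not by understanding, which is one reason these tools can produce fluent but factually wrong output.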

These generative AI tools are the latest in a progression of tools we’ve all become familiar with—from the automatic chatbots on customer service webpages to the editing and phrase-completion tools we see in Word and Google Docs. While the text-generation capabilities of these recent tools seem a huge leap over Word’s editorial suggestions, integration of these generative technologies is likely to continue growing.

How can you most productively address ChatGPT and other AI text tools? 

As with any discussion about academic integrity, we want to address the very real and important question of where you want to put your time and energy. We understand the need to ensure academic integrity in your courses, but we also want to promote approaches that can help you focus less on policing student behavior so you can get back to teaching. How can you keep learning at the core of your interactions with students, allowing you to strike a balance that doesn’t make you hate teaching or view your students as adversaries?  

In this case, rather than focusing solely on a punitive approach of catching uses of AI-generated text—which is a challenging proposition with a tool that learns and evolves—we recommend putting your time and energy into approaches that can mitigate its inappropriate use while improving student learning. 

Why might students use AI-generated text? 

There are many reasons why students may turn to online tools, including text-generation tools. For some it is a way to start the research process and get help with narrowing down their topic areas, similar to how they might use other research tools. For others, it might be the novelty of using this new tool that is getting so much media attention. Some students, however, may use these AI tools as shortcuts to complete their work for them. In this case, it is important to consider why students may choose to outsource their work to an AI bot. They may find themselves needing to complete an assignment quickly because of poor time and workload management skills, they may feel pressure to receive a high grade, or they may not feel confident with their content knowledge or academic writing skills, among other reasons. Simply writing students off as lazy or dishonest misses valuable opportunities to structure assignments in ways that can encourage academic integrity and promote student learning and success. 

We address some ways below that you can revise specific assignments to promote academic integrity (additional structure, increased relevance, alternate assignment formats, etc.), but instructors can also help students avoid misuses of these tools by addressing broader issues. If you want to help students engage in the research process, for example, we highly recommend working with IU librarians to support students in that process. Or, if you are concerned about your students' time management skills, we recommend you turn to the Student Academic Center (SAC) and their time management resources page. We understand that instructors are not directly responsible for student academic misconduct, but it is always better to address the challenges students face through educational support than to deal with disciplinary issues later. 

How can you identify AI-generated text? 

Much of the early discussion around generative AI so far has been about how you can identify the text it generates. While we are trying not to make detection and punishment the focus of our efforts, here are a few things you should know. 

  • Text-matching isn’t very useful. Many of the plagiarism-detection tools we have, such as Turnitin, rely on text-matching between a submitted student assignment and the company’s database of other essays, as well as what is available online. That matching is not feasible with AI tools that generate novel text with each prompt and that can be told to quote and paraphrase. 
     
  • Available detection tools. There are already a few tools out there that claim to spot AI-generated text, including GPT-2 Output Detector and GPT Zero, and they do seem to offer some ability to spot AI-generated language patterns. But current reports indicate widely varying degrees of detection success, including a significant number of false positives, especially with text generated by non-native speakers of English. Further, there is a growing body of online tools designed to make AI-generated text undetectable. So, using such inconsistent (and likely defeatable) detection tools as the basis of an academic integrity strategy is problematic, and more so if they are the basis for an academic misconduct claim.

    Note that Turnitin has released their own AI detection product, but at the current time, IU's implementation of Turnitin does not include this new tool. Before including this tool in our suite of supported products, IU is examining its accuracy and efficacy, as well as how it handles student data. Further, current university guidelines indicate instructors should not upload student work to ChatGPT or AI-detection tools, due to privacy and intellectual property concerns; we have no control over how they use submitted work.
     
  • Look to your discipline. One of the best ways to understand the characteristics and limitations of ChatGPT is to look for conversations within your discipline that might identify the kinds of patterns or errors AI makes about your field. And consider entering your assignments into ChatGPT to see what kinds of flags or indicators of AI-generated content emerge.

  • Know your students' writing. Knowing your students' writing styles (from smaller assignments or in-class writing tasks) can help you identify when a submission seems different from what they usually submit. That is not proof of AI use, but it is a starting point for a conversation with the student about their work. A caveat: GPT-4 is improving at matching the style of a provided sample.

  • Focus on process, not just product. This is an approach well known to writing instructors: a focus on drafts and the research/writing process can both lead students to produce better work and lessen the likelihood they will turn to AI-generated text.


How can you address AI-generated content in your syllabus and course design?

Most instructors already include statements in their syllabi about academic integrity, so extending those practices to overtly include ChatGPT and other AI-generated text makes sense. Here are some suggestions:

  • Focus on positive aspects of academic integrity. Establishing trust and a sense of community can reduce cheating, so develop policies and practices that promote a positive learning community (see Rettinger, 2022). In fact, some instructors have students co-create course honor codes and discussion norms, and this would be an ideal situation for such collaborative practices. Ironically, threatening or punitive language in a syllabus can break down instructor-student relationships and make cheating psychologically easier for students. This doesn’t mean you cannot lay out consequences, but lead with the reasons for ethical behavior, and avoid legalistic or adversarial tones. See our Inclusive and Equitable Syllabus page for more information on the benefits of positive syllabus language. 
     
    If your course has opportunities to address ethics in the discipline, be ready to address those, too. What are the benefits of integrity in your field, and what are the implications of misrepresenting one's work?  
     
  • Focus on benefits of learning. Help students understand why you are giving them assignments, and emphasize how reliance on AI can hinder their learning (although there might be ways AI can help that process, too). Do you talk with your students about what you want to see in their writing, and do your assignments and grading practices reflect that? Students sometimes cheat because they think an assignment is simply a mechanical performance for a grade, so the more you can show the benefits for learning (personal relevance, reflection on the process and their growth, feelings of accomplishment, etc.), the less likely they are to take those shortcuts. For a model for clarifying an assignment's purpose, see our page on the TILT model of assignment design.
     
    If you can make career connections within your course, address how reliance on AI-writing can hinder their development of writing skills that employers highly value. 
     
  • Have clear course policies about whether and how students may use AI tools in your class. Have a clear statement in your syllabus about your expectations around generative AI, and be ready to discuss it frequently throughout the semester. Make sure your policies explain the reasons for your decisions, so students can see that the policies reinforce your learning goals for them and aren't arbitrary rules. For sample policies, see Classroom Policies for AI Generative Tools, a crowdsourced resource collected by Lance Eaton.

    If you do want to allow some uses of generative AI, be clear about how students can use it and how they shouldn't. For example, Justin Hodgson from IUB's English Department provides clear guidelines for an "ethics of practice" within his courses. See his guidelines in his broader Ethics of Practice page; see also the recording from his August 15, 2023 webinar.

  • Specifically discuss ChatGPT and other AI tools. In addition to discussing your course policies around generative AI, be ready to talk with students more generally about these tools: how well they engage with your disciplinary content, what impact they might have on how students learn in your course, and how they might affect students' future careers. Showing students you are savvy about generative AI can both be a deterrent to its misuse and open up opportunities for meaningful conversations with students.

  • Reduce workload and focus on process to disincentivize cheating. A significant reason students cheat is because they feel overwhelmed, so consider whether your course workload or assignment timing might be contributing to this pressure and working against your goals for student learning. Similarly, designing writing assignments around only one final paper can lead to procrastination and desperation, both big causes of cheating. Having students build toward that assignment in smaller chunks can reduce that procrastination and anxiety, and give you more reference points about their writing style along the way. 

How can you adjust assignments to make them more AI-resistant? 

Probably the best way to guard against inappropriate use of AI-generated text is to redesign your assignments, both the prompts themselves and the related processes. Success in these efforts will depend on a combination of two factors: 1) increasing student motivation by making the assignments relevant and engaging, and 2) making assignments more resistant to AI use through some of the suggestions below. These options vary in their usefulness across contexts, but consider these ideas as starting points:

  • Avoid simple fact-based questions. The first step with avoiding AI-generated responses is to avoid prompts with specific, factual answers. ChatGPT, however, still does a pretty good job with higher-order questions (analysis, synthesis, etc.), and it can be pretty creative (see the viral PBJ in a VCR example). So, aim for assignments calling for more complex cognitive skills, and then layer on some of the other techniques below.  
     
  • Use class-specific cases and examples. Tie writing prompts to unique or fictional cases or scenarios in your class, particularly if those cases build over time and draw on in-class activities or group work. Relying on in-class activities as a basis for assignments leaves AI without necessary information, and feeding it all that information would be time-consuming for students. If you use this approach, be ready to have an alternate assignment ready for students who cannot come to class for medical or other legitimate reasons. 
     
  • Break large assignments into smaller stages. Giving an assignment in one big chunk can add pressures that sometimes drive students to cheat, while breaking an assignment into smaller pieces can improve learning and writing skills while mitigating these pressures and reliance on AI. Consider breaking larger assignments into multiple stages, giving feedback and grades on each one, and perhaps incorporating peer feedback. This helps in multiple ways: 1) It mitigates the pressure to cheat that emerges from procrastination and feeling lost on a big, high-stakes assignment; 2) It gives you some sense of students’ writing styles along the way, especially if in-class writing is added to the mix; and 3) It leads to better learning and writing in general. 

  • Ask for meta- or reflective statements. Having students reflect on their writing process is always a good practice: asking them what their process was like, what parts of the project were most interesting or meaningful to them, what challenges they experienced, or where they'd go next with their learning. The approach can also boost personal engagement with assignments and discourage use of generative AI. And if you allow use of AI, this is an opportunity for students to reflect on how they used it and how it fit into the rest of their process.
     
  • Mix in some in-class writing. This can be anywhere in the writing process—early idea-development stages, syntheses of in-class activities that will be incorporated into the project, or reflections on their work and process. Aside from being a valuable approach to teaching writing in your discipline, in-class writing can provide a baseline of a student’s style that can be used to identify writing that isn’t the student’s original work. Those of us who have dealt with plagiarism before know that those students rarely know the submitted work well, nor are they able to talk about their writing process. But try to use in-class writing for learning opportunities, not just a policing tool. 
     
  • Ask for personal connections and examples. Sure, students can ask ChatGPT to do this—there are numerous examples of AI generating passable "personal" college admission essays—but adding a personal element like this might reduce the likelihood that students will turn to AI, especially if you have also built personal relationships with them. In general, anonymity makes cheating easier, both functionally and psychologically. 
     
  • Use sources AI cannot access. Since AI tools have overcome their original limitation of not being able to access the live web, consider finding other ways to take advantage of AI's limited reach. Could you utilize pre-print scholarship, unpublished poems, guest speakers, recorded interviews with experts, or items behind paywalls? We've seen assignments that ChatGPT seems to address well until they include phrases like, "Using evidence from the video we viewed in class..." or "Based upon the case described during the guest lecture on February 21st ...." A student could feasibly feed this information into the AI, but relying on inaccessible sources can be useful, especially when layered with other strategies. 

  • Use images in prompts. For now, ChatGPT only takes text input, so having students respond to a unique image or diagram makes the AI unable to answer the question. ChatGPT has solved some pretty good application/diagnostic questions in biochemistry, for example, but it cannot currently view and interpret an image of a chemical reaction or cell. Further, asking for student responses to be in the form of a diagram or image can take AI text-generators out of the mix. The emerging GPT-4 tool can detect some images, but it cannot yet analyze charts or diagrams. Be aware that using images might be a problem for students with vision impairments, so be ready to provide an alternative assignment for them. 
     
  • Utilize alternative assignments. Consider other ways that students could demonstrate their knowledge and mastery of learning outcomes, including “performative tasks.” This is a useful practice in general, aligning well with precepts of Universal Design for Learning, but it also avoids AI text generators altogether, or at least relegates them to a supporting role. Could students develop a video, podcast, drawing, or vlog (video blog) to demonstrate their knowledge? Remember that ChatGPT can be very creative in text, so asking for alternate outputs is the key here. 
     
  • Run your own assignments through ChatGPT. Curious how well AI answers your assignments? Try running them through yourself to see both how well it does and what markers you see of its work. If it provides solid answers, you might want to keep working on the assignment. 

How can you talk to students about potential misuse of generative AI?

Long before generative AI, instructors have had to address potential student plagiarism or other forms of academic misconduct on assignments. And while this sometimes had a paper trail that included clear matches to online or print materials, more often the instructor would have more general evidence that the writing did not sound like the student's voice, was significantly different from their prior work, or didn't match their sophistication with language or the content. Talking to students about such suspicions is a delicate task, where outright accusations can lead to confrontational situations rather than educational ones. So, how can instructors firmly uphold academic integrity while keeping student learning and growth at the heart of the discussion? Please see the video below for suggestions from Dr. Miranda Rodak, the Director of IU's Mosaic Initiative and Senior Visiting Lecturer in IUB's Kelley School of Business.

 

How can you embrace the AI tools for improving student learning?

Most of our comments so far have focused on mitigating or disrupting the use of ChatGPT and other generative AI tools. But since these tools will only proliferate and will be a part of our students’ educational and professional careers, consider ways of actively addressing and embracing them as part of your work with students.* Here are some suggestions: 

  • Have students analyze AI-generated text. Have students use AI to generate a response to a key question in your class or field, and then have them analyze it for accuracy and quality. Their ability to spot flaws in the AI’s output can be a valuable learning experience and indicator of their knowledge, often forcing them to understand subtleties in a concept or argument, propose better solutions to a problem, or find better sources or citations.
     
  • Use AI for generating early drafts or starting points. AI can be used for the invention stage of writing, helping students get started. Track Changes in Microsoft Word can then show how they have revised and improved that text. 
     
  • Use AI as a heuristic. One of the interesting aspects of ChatGPT is the ability to continually refine a question. Consider having students utilize AI as a heuristic tool, helping them learn how to ask iterative questions in order to refine a response. Sometimes knowing how to frame a question can demonstrate understanding of a complex task, and "prompt engineering" skills are becoming valuable for those wanting to explore issues with AI.
     
  • Engage students in understanding how AI can shape their future work. Having discussions about the evolving power of AI within your discipline can be a valuable career tool. What are the types of things generative AI tools can do for professionals in your field, and what are the potential pitfalls of using or relying on such tools? Matching such discussions with hands-on practice with the tools can be a powerful professional development practice. 

* Note: ChatGPT is currently available for free, but access to the most recent version (GPT-4) costs $20/month, so be aware of costs associated with incorporating this or other generative AI tools into your teaching, and the potential inequities between students who can and cannot afford the improved version. Also note the concerns the university raises about requiring students to use a tool it has not vetted for security and privacy issues.

 

While none of these approaches will completely prevent students from using AI in unauthorized ways in your classes, we hope that these suggestions improve learning for your students while promoting academic integrity. We recognize that there will always be instances of academic misconduct that you need to address—be aware of your department’s policies and procedures there—but focusing on ways to improve learning and disincentivizing cheating is better in the long run for both your students and yourself. 

As always, you can reach out to the CITL for assistance with course and assignment design, and we welcome any sample assignments or activities you are willing to share. We are also available to talk with departments about uses of AI in teaching and learning.

 

Additional Resources 

Alby, Cynthia. “ChatGPT: A Must-See Before the Semester Begins.” Faculty Focus. January 9, 2023. 

ChatGPT and AI in Teaching and Learning: Opportunities and Challenges. Webinar recording from January 18, 2023 faculty panel.

ChatGPT, Generative AI, & Syllabi: An Ethics of Practice. Webinar recording from August 15, 2023.

Developing AI Course Policies and Addressing Academic Integrity Violations. Webinar recording from the August 11 presentation in the IUB College of Arts and Sciences.

Lang, James. Cheating Lessons: Learning from Academic Dishonesty. 2013. (IU instructors and staff can also access a video of Lang’s 2020 SoTL talk on academic integrity.) 

McMurtrie, Beth. “AI and the Future of Academic Writing.” Chronicle of Higher Education. December 13, 2022. [IU off-campus proxy link here] 

Novotney, Amy. “Beat the Cheat.” APA Monitor on Psychology 42.6. June 2011.  

Rettinger, David. “Show Students You Care About Their Learning—They May Cheat Less.” The Faculty Lounge (Harvard Business Publishing). May 3, 2022.