How to Productively Address AI-Generated Text in Your Classroom 

Updated 1/25/2023

News about ChatGPT and its text-generating capabilities has been sweeping across higher education, raising questions about what AI-powered text generators mean for learning, writing, and academic integrity in our classes. Some students may misuse these AI tools, misrepresenting AI-generated text as their own, but the technology also has great potential to transform teaching and learning. This page is intended to provide some ideas on how we can engage with this emerging technology in productive and pedagogically sound ways. 

We will continue to update this page over time, so if you have a specific resource, insight, or teaching idea to share, please let us know.  

What are ChatGPT and AI-generated text? 

The current set of AI text tools utilizes large language models: artificial neural networks (algorithms designed to recognize patterns) trained on massive datasets of human-written text. One of the current models, GPT (Generative Pre-trained Transformer), is trained for the task of conversational language modeling and is fine-tuned to generate more contextually relevant responses. A key element of this is the ability to predict the next words and phrases within a conversational context (similar to what Word and Google do on a smaller scale). ChatGPT is the tool that has most recently hit the headlines, but there are many others evolving in this new field of interactive text generation.  
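For the technically curious, the core idea of next-word prediction can be illustrated with a toy model. The sketch below (a simple bigram counter over an invented mini-corpus, nothing resembling GPT's actual architecture or scale) just counts which word most often follows which:

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny invented corpus for illustration only.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once -> "cat"
```

Real language models replace these raw counts with learned probabilities over long stretches of context, but the underlying task, repeatedly choosing a likely next word, is the same.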

These generative AI tools are the latest in a progression of tools we’ve all become familiar with—from the automatic chatbots on customer service webpages to the editing and phrase-completion tools we see in Word and Google Docs. While the text-generation capabilities of these recent tools seem like a huge leap over Word’s editorial suggestions, integration of these generative technologies is likely to continue growing.

How can you most productively address ChatGPT and other AI text tools? 

As with any discussion about academic integrity, we want to address the very real and important question of where you want to put your time and energy. We understand the need to ensure academic integrity in your courses, but we also want to promote approaches that can help you focus less on policing student behavior so you can get back to teaching. How can you strike a balance that doesn’t make you hate teaching or view your students as adversaries?  

In this case, rather than focusing solely on a punitive approach of catching uses of AI-generated text—which is a challenging proposition with a tool that learns and evolves—we recommend putting your time and energy into approaches that can mitigate its inappropriate use while improving student learning. 

Why might students use AI-generated text? 

There are many reasons why students may turn to online tools, including text-generation tools. For some it is a way to start the research process and get help with narrowing down their topic areas, similar to how they might use other research tools. For others, it might be the novelty of using this new tool that is getting so much media attention. Some students, however, may use these AI tools as shortcuts to complete their work for them. In this case, it is important to consider why students may choose to outsource their work to an AI bot. They may find themselves needing to complete an assignment quickly because of poor time and workload management skills, they may feel pressure to receive a high grade, or they may not feel confident with their content knowledge or academic writing skills, among other reasons. Simply writing students off as lazy or dishonest misses valuable opportunities to structure assignments in ways that can encourage academic integrity and promote student learning and success. 

We address some ways below that you can revise specific assignments to promote academic integrity—additional structure, increased relevance, alternate assignment formats, etc.—but instructors can also help students avoid misuses of these tools by addressing broader issues. If you want to help students engage in the research process, for example, we highly recommend working with IU librarians to support students in that process. Or, if you are concerned about your students’ time management skills, we recommend you turn to the Student Academic Center (SAC) and their time management resources page. We understand that instructors are not directly responsible for student academic misconduct, but it is always better to address the challenges students face through educational support than to deal with disciplinary issues later. 

How can you identify AI-generated text? 

Much of the higher education discussion around ChatGPT so far has been about how you can identify the text it generates. While we are trying not to make detection and punishment the focus of our efforts, here are a few things you should know. 

  • Text-matching isn’t very useful. The tech field is already filled with tools to identify possible sources of plagiarism by looking for matches between a submitted text and the company’s database of other essays, as well as what is available online; IU uses Turnitin. That matching is harder to do with a tool that generates (mostly) original text, and that can be told to quote and paraphrase. Turnitin says they have some capacity now to identify AI-generated text, but that relies on their Turnitin Originality tool, which requires another sample of a student’s writing for comparison. 
     
  • Available detection tools. There are already a few tools out there that claim to spot AI-generated text, including GPT-2 Output Detector and GPT Zero, and they do seem to offer some promise at spotting AI-generated language patterns. But there are plenty of social media reports that indicate widely varying degrees of detection success, and instructions for circumventing these tools are already emerging on Reddit and other sites. So, using such inconsistent (and likely defeatable) tools as the basis of an academic integrity strategy is problematic, and more so if they are the basis for an academic misconduct accusation. Time will tell if we get more accurate detection tools, but this might be difficult, since the AI generating the text will continue to evolve, too. 
     
    On a related note, OpenAI, the company that created ChatGPT, is developing a “digital watermark” for ChatGPT text—essentially patterns of language that would act as a fingerprint that could be detected. Others are skeptical of that being a long-term solution, however, as another AI could be used to paraphrase the ChatGPT output. And the cat-and-mouse game continues....  
     
  • Made-up quotes. While AI-generated text can produce impressive responses to general prompts, and it can even provide citations, some sources indicate it simply makes up some quotations. Tom Zeller from Undark Magazine notes that when ChatGPT was asked to produce a journalistic story citing “quotes from experts,” the bot both made up some of those quotes and even created “fictional composites” of various real people in the field. (And, interestingly, we plugged a portion of the story ChatGPT created into the GPT-2 Output Detector and received a “99.92% real” report, casting more doubt on that detection tool’s accuracy.) 
     
  • The bluster factor. ChatGPT’s developers at OpenAI admit that this is a current shortcoming of the tool, noting that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” This isn’t always a clear giveaway, since some student writing has a “BS” component to it, as students themselves will admit, particularly when they are focusing on an imagined academic voice over substance. 
     
  • Look to your discipline. One of the best ways to understand the characteristics and limitations of ChatGPT is to look for conversations within your discipline. You have colleagues who are plugging their own assignments into the tool and analyzing the outputs. They can tell you where ChatGPT shines in writing about your discipline and where it falls short. 

How can you address AI-generated content in your syllabus and course design? 

Most instructors already include statements in their syllabi about academic integrity, so extending those practices to overtly include ChatGPT and other AI-generated text makes sense. Here are some suggestions:

  • Focus on positive aspects of academic integrity. Establishing trust and a sense of community can reduce cheating, so develop policies and practices that promote a positive learning community (see Rettinger, 2022). In fact, some instructors have students co-create course honor codes and discussion norms, and this would be an ideal situation for such collaborative practices. Ironically, threatening or punitive language in a syllabus can break down instructor-student relationships and make cheating psychologically easier for students. This doesn’t mean you cannot lay out consequences, but lead with the reasons for ethical behavior, and avoid legalistic or adversarial tones. See our Inclusive and Equitable Syllabus page for more information on the benefits of positive syllabus language. 
     
    If your course has opportunities to address ethics in the discipline, be ready to address those, too. What are the benefits of integrity in your field, and what are the implications of misrepresenting one’s work?  
     
  • Focus on benefits of learning. Help students understand why you are giving them assignments, and emphasize how reliance on AI can hinder their learning (although there might be ways AI can help that process, too). Do you talk with your students about what you want to see in their writing, and do your assignments and grading practices reflect that? Students sometimes cheat because they think an assignment is simply a mechanical performance for a grade, so the more you can show the benefits for learning—personal relevance, reflection on the process and their growth, feelings of accomplishment, etc.—the less likely they are to take those shortcuts.  
     
    Again, if you can make career connections within your course, address how reliance on AI-writing can hinder their development of writing skills that employers highly value. 
     
  • Specifically mention ChatGPT and other AI tools. Not only should you be specific about your views of AI-generated or -assisted writing, but it doesn’t hurt if students are aware of your knowledge about ChatGPT and other AI tools—what they are, how they can be used, some of their markers and characteristics, their limitations, etc. This can not only be a deterrent to their use, but it can also open up meaningful conversations about the role AI can play in your discipline. 
     
  • Reduce workload and focus on process to disincentivize cheating. A significant reason students cheat is that they feel overwhelmed, so consider whether your course workload or assignment timing might be contributing to this pressure and working against your goals for student learning. Similarly, designing writing assignments around only one final paper can lead to procrastination and desperation, both big causes of cheating. Having students build towards that assignment in smaller chunks can reduce that procrastination anxiety and give you more reference points about their writing style along the way. 

How can you adjust assignments to make them more AI-resistant? 

Probably the best way to guard against inappropriate use of AI-generated text is to redesign your assignments, both the prompts themselves and the related processes. These options vary in their usefulness across contexts, but consider these ideas as starting points: 

  • Avoid simple fact-based questions. The first step with avoiding AI-generated responses is to avoid prompts with specific, factual answers. ChatGPT, however, still does a pretty good job with higher-order questions (analysis, synthesis, etc.), and it can be pretty creative (see the viral PBJ in a VCR example). So, aim for assignments calling for more complex cognitive skills, and then layer on some of the other techniques below.  
     
  • Use class-specific cases and examples. Tie writing prompts to unique or fictional cases or scenarios in your class, particularly if those cases build over time and draw on in-class activities or group work. Relying on in-class activities as a basis for assignments leaves AI without necessary information, and feeding it all that information would be time-consuming for students. If you use this approach, be ready to have an alternate assignment ready for students who cannot come to class for medical or other legitimate reasons. 
     
  • Break large assignments into smaller stages. Giving an assignment in one big chunk can add pressures that sometimes drive students to cheat, while breaking an assignment into smaller pieces can improve learning and writing skills while mitigating these pressures and reliance on AI. Consider breaking larger assignments into multiple stages, giving feedback and grades on each one, and perhaps incorporating peer feedback. This helps in multiple ways: 1) It mitigates the pressure to cheat that emerges from procrastination and feeling lost on a big, high-stakes assignment; 2) It gives you some sense of students’ writing styles along the way, especially if in-class writing is added to the mix; and 3) It leads to better learning and writing in general. 
     
  • Mix in some in-class writing. This can be anywhere in the writing process—early idea-development stages, syntheses of in-class activities that will be incorporated into the project, or reflections on their work and process. Aside from being a valuable approach to teaching writing in your discipline, in-class writing can provide a baseline of a student’s style that can be used to identify writing that isn’t the student’s original work. Those of us who have dealt with plagiarism before know that those students rarely know the submitted work well, nor are they able to talk about their writing process. But try to use in-class writing for learning opportunities, not just a policing tool. 
     
  • Ask for personal connections and examples. Sure, students can ask ChatGPT to do this—there are already examples of AI generating passable "personal" college admission essays—but adding a personal element like this might reduce the likelihood that students will turn to AI, especially if you have also built personal relationships with them. In general, anonymity makes cheating easier, both functionally and psychologically. 
     
  • Use very recent sources. ChatGPT currently draws on a database that only goes through September 2021 (it admitted as much in a 1/15/2023 response about the latest COVID variant), and it would be unable to reference pre-print scholarship that isn’t yet online. A student could feasibly feed this material into the AI, but this generally is a valuable approach, especially when layered with other strategies. 
     
  • Use images in prompts. For now, ChatGPT only takes text input, so having students respond to a unique image or diagram makes the AI unable to answer the question. ChatGPT has solved some pretty good application/diagnostic questions in biochemistry, for example, but it cannot view and interpret an image of a chemical reaction or cell. Further, asking for student responses to be in the form of a diagram or image can take AI text-generators out of the mix. Be aware that using images might be a problem for students with vision impairments, so be ready to provide an alternative assignment for them. 
     
  • Utilize alternative assignments. Consider other ways that students could demonstrate their knowledge and mastery of learning outcomes, including “performative tasks.” This is a useful practice in general, aligning well with precepts of Universal Design for Learning, but it also avoids AI text generators altogether, or at least relegates them to a supporting role. Could students develop a video, podcast, drawing, or vlog (video blog) to demonstrate their knowledge? Remember that ChatGPT can be very creative in text, so asking for alternate outputs is the key here. 
     
  • Run your own assignments through ChatGPT. Curious how well AI answers your assignments? Try running them through yourself to see both how well it does and what markers you see of its work. If it provides solid answers, you might want to keep working on the assignment. 

How can you embrace the AI tools for improving student learning? 

Most of our comments so far have focused on mitigating or disrupting the use of ChatGPT and other generative AI tools. But since these tools will only proliferate and will be a part of our students’ educational and professional careers, consider ways of actively addressing and embracing them as part of your work with students.* Here are some suggestions: 

  • Have students analyze AI-generated text. Have students use AI to generate a response to a key question in your class or field, and then have them analyze it for accuracy and quality. Their ability to spot flaws in the AI’s output can be a valuable learning experience and indicator of their knowledge, often forcing them to understand subtleties in a concept or argument, propose better solutions to a problem, or find better sources or citations.
     
  • Use AI for generating early drafts or starting points. AI can be used for the invention stage of writing, helping students get started. Track Changes in Microsoft Word can then show how they have revised and improved that text. 
     
  • Use AI as a heuristic. One of the interesting aspects of ChatGPT is the ability to continually refine a question. Consider having students utilize AI as a heuristic tool, helping them learn how to ask iterative questions in order to refine a response. Sometimes knowing how to frame a question can demonstrate understanding of a complex task. 
     
  • Engage students in understanding how AI can shape their future work. Having discussions about the evolving power of AI within your discipline can be a valuable career tool. What are the types of things AI tools like ChatGPT or DALL-E (Open AI's image generator) can do for professionals in your field, and what are the potential pitfalls of using or relying on such tools? Matching such discussions with hands-on practice with the tools can be a powerful professional development practice. 

* Note: ChatGPT is currently available for free, but it is expected to eventually go behind a paywall, so be aware of costs associated with incorporating this or other generative AI tools into your teaching. There may always be new tools becoming available for free, but that might take some work to keep up with. Further, access to paid tools may be an equity issue worth considering. 


While none of these approaches will completely prevent students from using AI in unauthorized ways in your classes, we hope that these suggestions improve learning for your students while promoting academic integrity. We recognize that there will always be instances of academic misconduct that you need to address—be aware of your department’s policies and procedures there—but focusing on ways to improve learning and disincentivizing cheating is better in the long run for both your students and yourself. 

As always, you can reach out to the CITL for assistance with course and assignment design, and we welcome any sample assignments or activities you are willing to share. 

Additional Resources 

Alby, Cynthia. “ChatGPT: A Must-See Before the Semester Begins.” Faculty Focus. January 9, 2023. 

ChatGPT and AI in Teaching and Learning: Opportunities and Challenges. Webinar recording from January 18, 2023 faculty panel.

Lang, James. Cheating Lessons: Learning from Academic Dishonesty. 2013. (IU instructors and staff can also access a video of Lang’s 2020 SoTL talk on academic integrity.) 

McMurtrie, Beth. “AI and the Future of Academic Writing.” Chronicle of Higher Education. December 13, 2022. [IU off-campus proxy link here] 

Novotney, Amy. “Beat the Cheat.” APA Monitor on Psychology 42.6. June, 2011.  

Rettinger, David. “Show Students You Care About Their Learning—They May Cheat Less.” The Faculty Lounge (Harvard Business Publishing). May 3, 2022.