AI-Generated Content - Implications for Teaching and Learning

The ChatGPT AI chatbot has been a media sensation since November 2022, when the company OpenAI released it publicly. ChatGPT is a web-based application backed by an artificial intelligence capable of generating output that is in many cases indistinguishable from content written by humans. The discussion around ChatGPT, however, is often broader than ChatGPT itself. Many of the questions in this document apply to any AI capable of generating human-like output in a variety of formats, including images, videos, sounds, and the written word.

RIT’s Center for Teaching and Learning offers the following FAQs about how to reflect upon and adapt your current teaching approaches and practices in light of quickly developing AI capabilities. This guidance will likely change as these technologies, and how we address them, continue to evolve.

Instructors: please share your thoughts and ideas with us. Email examples of how you are integrating AI tools into your courses to


It is important to note that ChatGPT is one of many different types of AI capable of generating content, and that content is not necessarily limited to the written word. Below are some examples of content-generating AI tools and their focus:

  • Claude: A tool similar to ChatGPT that is currently in closed beta
  • DALL-E: A tool capable of generating images from text prompts
  • Stable Diffusion: An open source program capable of generating images from text prompts; because it is open source, students are able to download and run this on their own machines
  • Bard: A Google product with similar capabilities to ChatGPT
  • Microsoft Bing: Microsoft, a major investor in OpenAI, has announced that it has added a tool similar to ChatGPT, and based on the same technology, to its Bing search engine. This tool is currently available only in the Microsoft Edge browser and allows the user to tweak the writing style by clicking a button.

As more AI tools such as ChatGPT become integrated into our lives, it may help to think critically about their use and impact before talking with your students. While AI can advance our ability to collect and understand information on many topics, that convenience may come at a cost. To use these tools effectively, ethically, and appropriately, you will want to think critically about what the technology does well, what it does not do well, and what we give up or trade away when we use it. You may also want to reflect on how a particular technology might affect our ability to innovate and create as a society. Faculty can recognize the usefulness of a tool like ChatGPT while also guiding students to be critically thoughtful users and consumers of its output. You may even want to co-create “rules of the road” expectations with your students for the possible integration of AI.

One of the most common entry points is to talk with your students about AI-generated content on the first day of class during the syllabus review. In this conversation, it is important to clarify your expectations for the class with respect to AI-generated content. In the spirit of being curious and open-minded, you could expand the conversation along the following lines, many of which come from The Eberly Center:

  • Convey confidence in your students’ motivation and engagement
  • Ask students about their experience with AI tools generally
  • Talk about academic integrity early on and why it’s important
  • Be transparent about why ChatGPT or AI tools are concerning or exciting to you in the context of your course and discipline
  • Consider inviting your students to discuss and co-create a ChatGPT-inflected Academic Integrity statement for the syllabus

Other topics to discuss with students were raised at a faculty webinar led by Liz Lawley, Matt Wright, Christopher Schwartz, and Nate Mathews. Broadly, these include:

  • Show students the types of things ChatGPT gets wrong, and its limitations, as a way to caution against using it directly (whether to allow it for brainstorming or revision is up to your discretion)
  • Compare the bland, surface-level writing it produces with what is expected of college-level writing
  • Discuss the equity, ethical, cost, and environmental-sustainability concerns surrounding ChatGPT
  • Contextualize its use or non-use for your domain
  • Explain why it’s important to do certain tasks in your class on your own, without assistance, so students can become “competent and independent practitioners”
  • Help students understand the WHY behind the learning of certain things
  • Highlight the value of human skills and the value of AI “skills,” and how the two can work together

Think about incorporating some hands-on approaches to the conversation. Some of the easiest things to demonstrate in real-time are the types of things that are often wrong when using the tool:

  • While reviewing the output of a prompt, you can point out the surface-level writing that ChatGPT produces
  • ChatGPT has difficulty with content that requires deep reasoning, including cause-and-effect relationships

One thing to remember when adopting the approach above is that these models will almost certainly evolve to address the depth of their writing, to handle cause and effect relationships better, and to possibly become better in specific domains. ChatGPT currently exhibits these weaknesses, but its capabilities will evolve and new tools will come along.

The known limitations of ChatGPT at the time of this writing include:

  • It gets things factually incorrect and presents that incorrect information confidently. Students who don’t know what they don’t know will be given incorrect answers and not realize they are incorrect
  • It frequently makes up imaginary citations
  • While it may appear as though it does, it cannot actually reason, and it therefore makes logic errors in both mathematical and programming output
  • It has limitations in both the input and output; the approximate length it can process is 500 words 
  • The ChatGPT AI model is trained on a specific set of source data, and none of the data is more recent than 2021. It cannot search the current internet 
  • A small subset of potential ChatGPT failures is documented in the ChatGPT failure archive

Due to the way AI pulls in information and learns, there are several ways it can reinforce or amplify bias and support incorrect assumptions. All AI is vulnerable to the bias exhibited in the training data used to create the model, and ChatGPT is no different. If the tool pulls from limited sources for training data, or from sources that share similar assumptions, that bias will be reflected in the output.

Because of the way training data is collected and confirmed, an especially problematic issue within the broad category of bias concerns equity and inclusion. All teaching and learning strategies, including those involving AI, have benefits and potential problems, so we recommend considering the implications for equity and inclusion. Here are just a few of the implications regarding ChatGPT and other AI tools:

Again, all AI is vulnerable to the bias exhibited in the training data used to create the model. ChatGPT is no different. In the past, there have been examples of bias found in:

  • AI-proctored exams that resulted in individuals of color being flagged more often than white individuals
  • Other chatbot interfaces, which resulted in biased responses
  • GPT-3, the AI technology underlying ChatGPT

Privacy Concerns:
There is also the potential for privacy concerns with respect to student data. AI is trained on data, and currently every prompt posed to ChatGPT is being used to train the model further. It is conceivable that a student could provide personal information to the AI, which could then use that data as part of its training. Once trained, the AI could present this information, or very similar information, to another user in the future.

Additionally, ChatGPT may become commercialized as sponsoring organizations look toward monetization, and as such may be available only to individuals with the financial means to pay for the tool. This creates an access issue you will want to be aware of. OpenAI, the company behind ChatGPT, currently offers a $20-per-month subscription to the service, and others are likely to follow suit. As of this writing, the freely available version of ChatGPT also suffers from periods of instability during which the service is unavailable. Microsoft has already incorporated aspects of ChatGPT into the Bing search engine.

Students with Disabilities:
Some recent articles in the higher education press have suggested that assigning in-class, handwritten, or oral work is the most effective way to bolster academic integrity in the context of ChatGPT. However, as cautioned by the Eberly Center and others, relying exclusively or excessively on these low-tech, time-limited approaches may prevent non-native English speakers, deaf and hard-of-hearing learners, and students with disabilities who require laptop access during class or other accommodations from RIT’s Disability Services Office from fully demonstrating their learning.

Numerous course design and teaching strategies have emerged in response to ChatGPT. Many of these strategies attempt to exploit current ChatGPT “limitations” in the following if-then ways:

  • Because ChatGPT is text-based, ask students to produce non-text work, such as infographics, podcasts, videos, drawings, or diagrams
  • Since ChatGPT cannot access materials that require a login, ask students to make connections and references to discussions and materials posted to an LMS (myCourses)
  • Ask students to write an essay reflecting on specific experiences that have occurred in your class

Other similar but more sophisticated strategies, such as these from Michigan’s Center for Research on Learning & Teaching, include: 

  • Sequence major assignments to include project proposals/outlines, multiple drafts, annotated bibliographies
  • Specify the types of source materials students should use, including some that are very specific to the assignment, such as field specific journal articles that require authentication, data collection and analysis when relevant, or client assessment for field assignments
  • Ask students to engage in and submit a reflection about what they have learned from completing the assignment. Sample prompts include: a) Discuss the most challenging and most rewarding aspects of your project. b) What was the most surprising thing you learned in the course of this project? c) If you had the chance to do it again, what one thing would you have done differently on this project?
  • Have students work on peer editing and peer commentary as part of the evaluation/writing process, so that they have to comment and make suggestions and respond to other students' writing
  • Have students write and submit a Google Doc where you are added as an editor so that you can see the document history
  • Focus on research skills and the expression of original thought, rather than creating a synthesized document

A good starting point for any assignment incorporating AI-generated content is for the instructor to experiment with the tool to understand its potential output. While ChatGPT can generate varying responses, this experimentation gives instructors a baseline for student-generated ChatGPT responses. Instructors may also provide this output to students who have difficulty accessing ChatGPT, alleviating some of the equity issues.
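For instructors comfortable with scripting, this experimentation can be automated to collect several baseline responses at once. The following is a minimal sketch only, assuming the `openai` Python package and its chat-completion interface as available at the time of writing; the model name, message roles, and sample prompt are illustrative assumptions, not a prescribed setup:

```python
import os

def build_messages(assignment_prompt):
    """Wrap an assignment prompt in the chat-message format the API expects."""
    return [
        {"role": "system",
         "content": "You are a college student completing a written assignment."},
        {"role": "user", "content": assignment_prompt},
    ]

def get_baseline_responses(assignment_prompt, n=3):
    """Request several completions for the same prompt to see the range of
    output students might plausibly generate. Requires an OpenAI API key in
    the OPENAI_API_KEY environment variable."""
    import openai  # pip install openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # illustrative model choice
        messages=build_messages(assignment_prompt),
        n=n,                     # number of sample completions to collect
    )
    return [choice["message"]["content"] for choice in response["choices"]]
```

Collecting multiple samples per prompt matters because, as noted above, ChatGPT generates varying responses; a single run can give a misleading picture of the baseline.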

ChatGPT can be included at different points during an assignment; here are a few possibilities:

  • Students could use ChatGPT to generate a starting point that can be edited and improved
  • Students could be tasked with creating a robust prompt by asking ChatGPT questions, and then refining with additional questions
  • Students could compare and contrast the information provided through ChatGPT and a traditional literature review
  • The assignment could be divided into easily assessed and reviewed parts where ChatGPT can be used by the student where applicable
  • Students could be encouraged to identify what prompts were made and consider how these might be enhanced
  • Faculty have used ChatGPT to edit student work and then provide the responses from ChatGPT as well as a critique of those responses to the students

Many instructors have used AI to generate instructional materials, including varied examples, multiple explanations, low-stakes testing content, summaries of student responses, and more. Mollick and Mollick provide example AI prompts for the material types above, along with more strategies, in their paper Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts.

As with all of the recommendations in this FAQ, it is important to scrutinize the output of the AI tool for accuracy prior to using the output.

The citation method for ChatGPT is currently a hotly debated topic, and is likely to evolve quickly. There are a few hints as to which direction these citations will likely take in the future:

  • The MLA has a blog post addressing the topic
  • The APA has tweeted about its evolving opinion on how to move forward
  • OpenAI has a recommendation on how to cite their content specifically
  • OpenAI also has a Terms of Use page that indicates in section 2 part C that users may not represent ChatGPT’s work as human-generated

In all cases, students should include all material ChatGPT created, labeled as sourced from ChatGPT. Students should also verify the facts provided by ChatGPT, as it will occasionally provide false information; those verified facts can then be cited as well.

There is no easy answer to this question. There are several “AI detectors” that purport to tell you whether something has been generated by an artificial intelligence. It is important to note that these detectors are often only capable of predicting with some likelihood that something is AI-generated and are often wrong. Additionally, there will likely come a day when the models used to generate student content are of sufficient quality that their presence in a text or image will be undetectable. As such, it is important to not rely on these tools for the purposes of your class and instead focus on designing assignments that are tolerant of AI in the learning environment.

There are also ways to adjust ChatGPT’s writing style enough to fool a detector, and as AI tools improve, these detectors are unlikely to be effective even on unedited output.

Turnitin has released a feature advertised as having the ability to detect AI-generated content. RIT has a license for Turnitin and it is integrated with myCourses. In early testing, it has been found to be an unreliable indicator of AI-authored content and has several limitations. It is likely the tool will improve over time. However, so will AI-generated authoring and this arms race is likely to continue indefinitely. Despite the advertised capabilities of this technology, the CTL recommends pursuing the other recommendations within this document when delivering your course materials.

A few of the currently available detectors are below:

These links from RIT’s Academic Integrity website will provide you with tools and strategies for promoting student academic integrity, but be aware that many aspects of our policies may be functionally unenforceable given the pace and complexity of AI development.