AI-Generated Content

Generative AI has been a frequent topic of academic and business conversation since November 2022. Generative AI is a category of tools, backed by artificial intelligence models, capable of producing output that is in many cases indistinguishable from content created by humans. These tools generally require no specialized knowledge to use, and many are available that can generate human-like output in a variety of formats, including images, video, audio, and the written word.

RIT’s Center for Teaching and Learning offers the following FAQs about how to reflect upon and adapt your current teaching approaches and practices in light of quickly developing generative AI capabilities. This guidance will likely change as these technologies, and how we address them, continue to evolve.

Instructors: please share your thoughts and ideas with us. Email examples of how you are integrating generative AI tools into your courses to


There are many types of output you can get from generative AI tools, including text, images, audio, slides, and code. New tools are being developed all the time. Below are some examples of generative AI tools and their focus:

  • ChatGPT: Generates text from text prompts or image prompts (depending on the version).
  • Claude: Generates text from text prompts.
  • DALL-E: Generates images from text prompts. 
  • Stable Diffusion: Generates images from text prompts.
  • Bard: Generates text from text prompts or image prompts.
  • Microsoft Bing: Generates text or images from text prompts or image prompts. It is limited to use within the Bing search engine in the Microsoft Edge browser.
  • GitHub Copilot: Generates code suggestions as you write in popular code editors.

As more AI tools become integrated into our lives, it may help to think critically about their use and impact before talking to your students. While generative AI can expand our ability to gather and understand information on many topics, that convenience may come at a cost. To use these tools effectively, ethically, and appropriately, you will want to evaluate critically what the technology does well, what it does not do well, and what we give up or trade away when we use it. You may also want to reflect on how a particular technology might affect our ability to innovate and create as a society. Faculty can recognize the usefulness of generative AI tools while also guiding students to be critically thoughtful users and consumers of their output. You may even want to co-create with students a set of "rules of the road" expectations for the possible integration of generative AI.

One of the most common entry points is to talk with your students about AI-generated content on the first day of class during the syllabus review. In this conversation, it is important to clarify your expectations for the class with respect to AI-generated content. In the spirit of curiosity and open-mindedness, you could expand the conversation along the following lines, many of which come from The Eberly Center:

  • Convey confidence in your students’ motivation and engagement
  • Ask students about their experience with AI tools generally
  • Talk about academic integrity early on and why it’s important
  • Be transparent about why generative AI tools are concerning or exciting to you in the context of your course and discipline
  • Consider inviting your students to discuss and co-create an Academic Integrity statement for the syllabus that addresses expectations around the use of generative AI in the course

Other topics to discuss with students were raised at a faculty webinar sponsored by Liz Lawley, Matt Wright, Christopher Schwartz, and Nate Mathews. Broadly, these include:

  • Show students the kinds of things generative AI gets wrong, and its limitations, as a way to caution against uncritical use (whether students may use it to brainstorm or revise is up to your discretion)
  • Discuss the equity, ethical, cost, and environmental sustainability concerns surrounding generative AI
  • Contextualize its use or non-use for your domain
  • Explain why it’s important for students to do certain tasks in your class on their own, without assistance, so they can become “competent and independent practitioners”
  • Help students understand the WHY behind the learning of course concepts
  • Highlight the value of human skills and the value of AI “skills,” and how the two can work together

Due to the way that generative AI collects information and learns, there are several ways in which AI can reinforce or amplify bias and support incorrect assumptions. All AI is vulnerable to the bias exhibited in the training data used to create the model. If the tool pulls from limited sources for training data, or from sources that share similar assumptions, that bias will be reflected in the output.

Because of the way training data is collected and validated, one especially problematic issue within the broad category of bias concerns equity and inclusion. All teaching and learning strategies, including those involving AI, have benefits and potential problems, so we recommend that you consider the implications for equity and inclusion. Here are just a few implications of generative AI tools:

Bias:
Again, all AI is vulnerable to the bias exhibited in the training data used to create the model. There have been past examples of bias in other chatbot interfaces that resulted in biased responses.

Privacy Concerns:
There is also the potential for privacy concerns with respect to student data. Generative AI is trained on data, and for some tools the prompts you enter are used to train the model further. It is conceivable that a student could provide personal information to the AI, which could then use that data as part of its training. Once trained, the AI could surface this information, or something very similar, in a response to another user in the future.


Access and Cost:
Some tools are currently in free public beta. As the sponsoring organizations move toward monetization, tools may become available only to individuals with the financial means to pay for them. This creates an access issue that you will want to be aware of.

Students with Disabilities:
Some articles in the higher education press have suggested that assigning in-class, handwritten, or oral work is the most effective way to bolster academic integrity in the context of ChatGPT. However, as the Eberly Center and others caution, relying exclusively or excessively on many of these low-tech, time-limited approaches may prevent non-native English speakers, deaf and hard-of-hearing learners, and students with disabilities who require laptop access during class or other accommodations from RIT’s Disability Services Office from fully demonstrating their learning.

Numerous course design and teaching strategies have emerged in response to generative AI. Some strategies from Michigan’s Center for Research on Learning & Teaching include:

  • Sequence major assignments to include project proposals/outlines, multiple drafts, annotated bibliographies
  • Specify the types of source materials students should use, including some that are very specific to the assignment, such as field specific journal articles that require authentication, data collection and analysis when relevant, or client assessment for field assignments
  • Ask students to engage in and submit a reflection about what they have learned from completing the assignment. Sample prompts include: a) Discuss the most challenging and most rewarding aspects of your project. b) What was the most surprising thing you learned in the course of this project? c) If you had the chance to do it again, what one thing would you have done differently on this project?
  • Have students work on peer editing and peer commentary as part of the evaluation/writing process, so that they have to comment and make suggestions and respond to other students' writing
  • Have students write and submit a Google Doc or Office 365 Word document where you are added as an editor so that you can see the document history
  • Focus on research skills and the expression of original thought, rather than creating a synthesized document

A good starting point for any assignment incorporating AI-generated content would be for the instructor to experiment with the tool in an effort to understand the potential output.

The use of generative AI can be incorporated at different points in an assignment; here are a few possibilities:

  • Students could use tools to generate a starting point that can be edited and improved
  • Students could be tasked with creating a robust prompt by asking the tool questions, and then refining with additional questions
  • Students could compare and contrast the information provided through the tool and a traditional literature review
  • The assignment could be divided into easily assessed and reviewed parts where the tool can be used by the student where applicable
  • Students could be encouraged to identify what prompts were made and consider how these might be enhanced

Many instructors have used AI to generate instructional materials, including varied examples, multiple explanations, low-stakes testing content, summaries of student responses, and more. Mollick and Mollick provide example AI prompts for these material types, along with further strategies, in their paper “Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts.”

As with all of the recommendations in this FAQ, it is important to scrutinize the output of the generative AI tool for accuracy prior to using the output.

The citation method for AI-generated content is still being standardized; however, some of the core style guides have published their perspectives on this topic.

In all cases, the student should be prepared to show all materials that the tool created, labeled as such. Students should also verify the facts provided by the generative AI tool, and those verified facts can also be cited, as these types of tools will occasionally provide false information.

There is no easy answer to this question. There are several “AI detectors” that purport to tell you whether something has been generated by an artificial intelligence. It is important to note that these detectors can only predict, with some likelihood, that something is AI-generated, and they are often wrong. Additionally, there will likely come a day when the models used to generate student content are of sufficient quality that their presence in a text or image will be undetectable. As such, it is important not to rely on these tools for the purposes of your class, and instead to focus on designing assignments that are tolerant of AI in the learning environment.

Turnitin has released a feature advertised as having the ability to detect AI-generated content. RIT has a license for Turnitin and it is integrated with myCourses. In testing, it has been found to be an unreliable indicator of AI-authored content and has several limitations. It is likely the tool will improve over time. However, so will AI-generated authoring and this arms race is likely to continue indefinitely. Despite the advertised capabilities of this technology, the CTL recommends pursuing the other recommendations within this document when delivering your course materials.

These links to RIT websites will provide you with tools and strategies for promoting student academic integrity but be aware that many aspects of our policies may be functionally unenforceable given the pace and complexity of generative AI.

Please contact the Center for Teaching and Learning for a consultation on course specific assignments.

Last Updated: 7/26/2023