Generative AI has been a common topic in academic and business conversations since November 2022, when ChatGPT was released to the public. Generative AI is a category of tools, backed by artificial intelligence models, capable of generating output that is in many cases indistinguishable from content created by humans. These tools generally require no specialized knowledge to use. Many tools are available that can generate human-like output in a variety of formats, including images, video, audio, and written text.
RIT’s Center for Teaching and Learning offers the following FAQs about how to reflect upon and adapt your current teaching approaches and practices in light of quickly developing generative AI capabilities. This guidance will likely change as these technologies, and how we address them, continue to evolve.
Instructors: please share your thoughts and ideas with us. Email examples of how you are integrating generative AI tools into your courses to firstname.lastname@example.org.
There are many types of output you can get from generative AI tools, including text, images, audio, slides, and code. New tools are being developed all the time. Below are some examples of generative AI tools and their focus:
ChatGPT: Generates text from text prompts or image prompts (depending on the version).
As more AI tools become integrated into our lives, it may help to think critically about their use and impact before talking to your students. While generative AI can advance our ability to gather and understand information on many topics, that convenience may come at a cost. To use these tools effectively, ethically, and appropriately, think critically about what the technology does well, what it does not do well, and what we give up or trade away when we use it. You may also want to reflect on how a particular technology might affect our ability to innovate and create as a society. Faculty recognize the usefulness of generative AI tools, but should also guide students to be critically thoughtful users and consumers of their output. You may want to co-create "rules of the road" expectations with your students for the possible integration of generative AI.
One of the most common entry points is to talk with your students about AI-generated content on the first day of class, during the syllabus review. In this conversation, it is important to clarify your expectations for the class with respect to AI-generated content. In the spirit of being curious and open-minded, you could expand the conversation along the following lines, many of which come from The Eberly Center:
Convey confidence in your students’ motivation and engagement
Ask students about their experience with AI tools generally
Due to the way that generative AI pulls in information and learns, there are several ways in which AI can reinforce or amplify bias and support incorrect assumptions. All AI is vulnerable to bias present in the training data used to create the model. If the tool pulls from limited sources for training data, or from sources that share similar assumptions, that bias will be reflected in the output.
Because of the way training data is collected and confirmed, an especially problematic issue within the broad category of bias is equity and inclusion. All teaching and learning strategies, including AI, have benefits and potential problems, so we recommend that you consider the implications for equity and inclusion. Here are just a few of the implications of generative AI tools:
There is also the potential for privacy concerns with respect to student data. Generative AI is trained on data, and for some tools the prompts you enter are used to train the model further. It is conceivable that a student could provide personal information to the AI, and the AI could use that data as part of its training. Once trained, the AI could repeat this information, or something very similar, to another user in the future.
Some tools are currently in free public beta. As the sponsoring organizations look toward monetization, those tools may become available only to individuals with the financial means to pay for them. This creates an access issue you will want to be aware of.
Students with Disabilities:
Some articles in the higher education press have suggested that assigning in-class, handwritten, or oral work is the most effective way to bolster academic integrity within the context of ChatGPT. However, as cautioned by the Eberly Center and others, relying exclusively or excessively on many of the proposed low-tech, time-limited approaches may prevent non-native English speakers, deaf and hard-of-hearing learners, or students with disabilities who require laptop access during class or other accommodations through RIT’s Disability Services Office from fully demonstrating their learning.
Sequence major assignments to include project proposals/outlines, multiple drafts, and annotated bibliographies
Specify the types of source materials students should use, including some that are very specific to the assignment, such as field specific journal articles that require authentication, data collection and analysis when relevant, or client assessment for field assignments
Ask students to engage in and submit a reflection about what they have learned from completing the assignment. Sample prompts include: a) Discuss the most challenging and most rewarding aspects of your project. b) What was the most surprising thing you learned in the course of this project? c) If you had the chance to do it again, what one thing would you have done differently on this project?
Have students do peer editing and peer commentary as part of the evaluation/writing process, so that they must comment on, make suggestions about, and respond to other students' writing
Have students write and submit a Google Doc or Office 365 Word document where you are added as an editor so that you can see the document history
Focus on research skills and the expression of original thought, rather than creating a synthesized document
In all cases, the student should be prepared to show all materials that the tool created, labeled as such. Students should also verify the facts provided by the generative AI tool, and those verified facts can also be cited, as these types of tools will occasionally provide false information.
There is no easy answer to this question. There are several “AI detectors” that purport to tell you whether something has been generated by an artificial intelligence. It is important to note that these detectors can only predict with some likelihood that something is AI-generated, and they are often wrong. Additionally, there will likely come a day when the models used to generate student content are of sufficient quality that their presence in a text or image will be undetectable. As such, it is important not to rely on these tools for the purposes of your class, and instead to focus on designing assignments that are tolerant of AI in the learning environment.
Turnitin has released a feature advertised as having the ability to detect AI-generated content. RIT has a license for Turnitin and it is integrated with myCourses. In testing, it has been found to be an unreliable indicator of AI-authored content and has several limitations. It is likely the tool will improve over time. However, so will AI-generated authoring and this arms race is likely to continue indefinitely. Despite the advertised capabilities of this technology, the CTL recommends pursuing the other recommendations within this document when delivering your course materials.
These links to RIT websites will provide you with tools and strategies for promoting student academic integrity but be aware that many aspects of our policies may be functionally unenforceable given the pace and complexity of generative AI.