Equity, Privacy, and Academic Integrity in Generative AI

All teaching and learning strategies, including those that incorporate generative AI, have advantages and disadvantages. Below are high-level overviews of some of the important considerations surrounding generative AI tools. Each item is a deep topic in its own right, and a full exploration is outside the scope of this overview.

Concerns

Generative AI tools learn from patterns in their training data, and their outputs are generally predictions of the most likely pattern given your input. Because these tools have no understanding of the material they generate and no sense of true or false, their outputs may not be accurate. The outputs may also sound plausible and convincing even when they are inaccurate. Users of generative AI tools still need a baseline understanding of the topics they bring to these tools so they can critically evaluate the outputs.

Because of the way generative AI gathers information and learns, there are several ways it can reinforce or amplify bias. At a basic level, generative AI is a prediction model built on a large set of data. It leverages frequently occurring patterns in that data, and some patterns occur less frequently than others because there is less data about them (e.g., certain populations are underrepresented or not represented at all). This can result in information missing from the output or incorrect assumptions being made. If the tool draws on sources that exhibit biased assumptions or lack diversity, that bias will be reflected in the output.

There are also potential privacy concerns with respect to personal data, intellectual property, and copyrighted material. Generative AI is trained on data, and for some tools, the prompts you enter are used by the tool's developer to train the model further. Content you put into the tool may become part of the model. Once trained, the AI could respond to another user in the future with this information, or very similar information, without attribution to the original owner.

All RIT faculty, staff, and students have access to Microsoft Copilot with commercial data protection. Using RIT’s protected version of Copilot provides more data protection than publicly available tools like ChatGPT.

Some tools are currently in free public beta. As sponsoring organizations move toward monetization, these tools may become available only to individuals with the financial means to pay for them. This creates an access issue to be aware of.

Generative AI models require substantial computing power to train and to run. The energy consumption, water use, and greenhouse gas emissions involved in developing and using these tools are among the main topics of conversation in this area.

Some articles in the higher education press have suggested that assigning in-class, handwritten, or oral work is the most effective way to bolster academic integrity in the context of generative AI. However, relying exclusively or excessively on these low-tech, time-limited approaches may prevent non-native English speakers, deaf and hard-of-hearing learners, and students with accommodations from RIT’s Disability Services Office (such as laptop access during class) from fully demonstrating their learning.

There are several “AI detectors” that purport to tell you whether something has been generated by artificial intelligence. It is important to note that these detectors can only estimate the likelihood that something is AI-generated, and they are often wrong. Additionally, AI models are constantly improving and new tools appear all the time, so generative AI detectors may not be able to keep up with these advancements. Furthermore, generative AI is being integrated into many of the everyday tools that students use, so AI-generated content within a file may be so interwoven with human-generated content that it cannot be differentiated. As such, it is important not to rely on these detectors for the purposes of your class. Instead, the CTL recommends setting clear expectations with your students about generative AI use in your course. For more information, review Generative AI: Syllabus Guidance.

Last updated 12/24/2024