Real or Fake?

Illustration by John W. Tomac: a person's face made up of pieces of multiple people's faces.

RIT experts are helping journalists and intelligence analysts detect digital deception

Seeing is believing. Well, it used to be, anyway.

Today, artificial intelligence (AI) is being used to manipulate media.

It can face-swap celebrities. It allowed a de-aged Luke Skywalker to guest star in The Mandalorian. It also falsely showed Ukrainian President Volodymyr Zelensky surrendering to the Russian invasion.

Deepfakes are videos, audio, or images that have been altered using AI. In a deepfake, people can be shown saying and doing things that they have never said or done.

This capability has profound implications for entertainment, politics, journalism, and national security. As deepfakes become more convincing, the challenge of distinguishing fact from fiction grows, threatening the credibility of news sources and the stability of democratic institutions.

At RIT, a team of student and faculty researchers is leading the charge to help journalists and intelligence analysts figure out what is real and what is fake. Their work, called the DeFake Project, has more than $2 million in funding from the National Science Foundation and Knight Foundation.

The RIT team aims to mobilize the best deepfake detectors around—observant humans armed with the right tools. “There is real danger in shiny new deepfake detectors that confidently offer often inaccurate results,” said Saniat (John) Sohrawardi, a computing and information sciences Ph.D. student leading the DeFake Project. “We need to provide journalists—and other experts who vet reality—with forensic tools that help them make decisions, not make the decisions for them.”

Journalists agree, and they are working with RIT.

Scott Morgan, a reporter and producer with South Carolina Public Radio, said that it’s getting harder and harder to spot a fake and that a good detection tool would be invaluable. For now, he said, he often relies on a “would that person really have said that” kind of approach.

“And ultimately, that’s what DeFake is trying to be—a tool that supplements the journalist’s gut feeling and complements old-fashioned legwork, but doesn’t replace them,” said Morgan. “Because even an AI-driven program that analyzes videos for the teeny-tiniest of clues that it might have been doctored shouldn’t be left to make decisions about what to do with that information or disinformation.”

Spotting the fake

Matthew Wright, endowed professor and chair of the Department of Cybersecurity, first saw a high-quality deepfake lip sync of President Obama in 2017. He called it a real “OMG moment.”

“It was really disconcerting,” said Wright. “The potential to use this to make misinformation and disinformation is tremendous.”

An expert in adversarial machine learning, Wright was already studying how AI can affect cybersecurity for good and for ill. Studying deepfakes seemed like a natural offshoot of that work.

In 2019, Wright and the newly formed DeFake Project team answered a call from the Ethics and Governance of Artificial Intelligence Initiative to build a deepfake detector. After developing some specialized techniques, their detector worked perfectly on curated deepfake datasets—it had 100-percent accuracy. Then they pulled up some YouTube videos to run through their detector.


“It would make mistakes,” said Wright. “But this wasn’t just our design. There is a cottage industry around developing deepfake detectors, and none of them are foolproof, despite the companies’ claims.”

Detectors can become confused when a video is even slightly altered, clipped out of context, or compressed. In 2021, for example, a Myanmar news outlet used a publicly available deepfake detector to analyze a video of a chief minister confessing to a bribe. The tool was 90-percent confident that the video was fake, yet expert analysis later determined it was in fact real.

“Users tend to trust the output of decision-making tools too much,” said Sohrawardi. “You shouldn’t make a judgment based on percentage alone.”

That’s why the DeFake Project is so important, said Andrea Hickerson, dean and professor of the School of Journalism and New Media at The University of Mississippi and a member of the project. The goal is to make a tool that journalists can actually use.

“If a trusted journalist accidentally shares a deepfake, it would reach a wide audience and undermine trust in the individual and the profession as a whole,” said Hickerson, the former director of RIT’s School of Communication.

“Journalists have important contextual expertise that can be paired with a deepfake detection tool to make informed judgments on the authenticity of a video and its newsworthiness.”

To better understand the journalistic process, the DeFake researchers interviewed 24 reporters from outlets ranging from national broadcast networks to local print media. Taking inspiration from a popular tabletop game, the team created a role-playing exercise called Dungeons & Deepfakes. The journalists were placed in a high-stakes newsroom scenario and asked to verify videos using traditional methods and deep-learning-based detection tools.

The team observed that journalists diligently verify information, but they, too, can over-rely on detection tools, just as in the Myanmar incident.

Most of all, journalists treated the overall fakeness score with healthy skepticism: they wanted insight into how it was calculated. Unfortunately, AI is not inherently good at explaining the rationale behind its decisions.

Unboxing the black box

When Pamposh Raina is asked to investigate a potential deepfake, she checks with multiple sources and often reaches out to RIT’s experts.

She is an experienced reporter who has worked with The New York Times, written for international publications, and currently heads the Deepfakes Analysis Unit at the Misinformation Combat Alliance, which is helping fight AI-generated misinformation in India.

One clip she questioned was being passed around social media in 2024. It was a video in Hindi that appeared to show Yogi Adityanath, chief minister of India’s most populous state, promoting a pilot gaming platform as a quick way to make money.

After running the video through detection tools from Hive AI and TrueMedia, and escalating the audio to ElevenLabs for analysis, the investigators wanted an expert view on possible AI tampering around Adityanath’s mouth in the video.

The DeFake team noted that the chief minister’s mouth animation looked disjointed and could be a result of the algorithm failing to extract proper facial landmarks. Ultimately, the Deepfakes Analysis Unit concluded that the video was fake and Adityanath did not utter the words attributed to him.


Creating meaningful tools like this is why Kelly Wu, a computing and information sciences Ph.D. student, came to RIT. After completing her undergraduate degrees in mathematics and economics at Georgetown University, Wu jumped at the chance to research deepfakes with the RIT team.

“Right now, there is a huge gap between the user and detection tools, and we need to collaborate to bring that together,” said Wu. “We care about how it will transition into people’s hands.”

Just like human brains, AI systems identify trends and make predictions. And just like in humans, it’s not always clear how a model comes to any particular conclusion.

Wu is figuring out how to unbox that AI black box. She aims to produce explanations that are both faithful to the AI model and interpretable by humans.

Many of today’s detection tools use heatmaps to explain their results. A blob of dark red over the eye region, for example, signals that this area weighed heavily in the model’s decision.

“But, even to me, it just looks like a normal eye,” said Wu. “I need to know why the model thinks this is important.”
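
Heatmap explanations like these are usually produced with gradient-based attribution methods. The snippet below is a minimal sketch of one common technique, Grad-CAM, run against a stand-in PyTorch image classifier. It illustrates the general idea only; it is not the DeFake tool's code, and the "fake" class index is an assumption.

```python
# Minimal Grad-CAM sketch (illustrative; not the DeFake tool's code).
# It highlights which image regions most influenced a classifier's "fake" score.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)   # stand-in for a real/fake face classifier
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block to see what it attends to.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

image = torch.randn(1, 3, 224, 224)     # placeholder for a face crop
score = model(image)[0, 1]              # assume class index 1 means "fake"
model.zero_grad()
score.backward()

# Weight each channel by its average gradient, then combine and normalize.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
heatmap = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # 0..1 overlay
```

The output is exactly the kind of red-blob overlay described above: informative about where the model looked, but silent on why that region mattered.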

The DeFake tool will highlight areas and provide detailed text explanations. The detector displays information on the processed content, including metadata, overall fakeness, top fake faces, and an estimate of the deepfake manipulation method used. It also incorporates provenance technology, extracting Content Credentials—a new kind of tamper-evident metadata. Because AI analysis is resource intensive, the tool lets users assess specific snippets of a video rather than the entire file.
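
As a rough, hypothetical illustration of the report fields described above, the structure might look something like the sketch below. The field names and types are assumptions made for clarity, not the DeFake tool's actual output format.

```python
# Hypothetical sketch of a detection report; names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class FaceFinding:
    face_id: int
    fakeness: float            # per-face score between 0 and 1
    heatmap_path: str          # where the highlighted-region overlay is stored
    explanation: str           # plain-language reason for the score

@dataclass
class DeepfakeReport:
    metadata: dict                           # container, codec, creation details
    overall_fakeness: float                  # aggregate score for the analyzed snippet
    top_fake_faces: list[FaceFinding] = field(default_factory=list)
    estimated_method: str = "unknown"        # e.g., face swap vs. lip sync
    content_credentials: dict | None = None  # C2PA-style provenance, if present
```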

Most recently, the DeFake Project, which now has nine members from three universities, is expanding to meet the needs of intelligence analysts.

In 2023, RIT earned a grant to work with the Department of Defense on bolstering national security and improving intelligence analysis.

RIT’s team is interviewing analysts and using their insights to help create a Digital Media Forensic Ontology that makes the terminology of manipulated media detection methods clearer and more consistent. Analysts can use the DeFake all-in-one platform along with the ontology to narrow down why content needs to be analyzed, where in the media analysts should focus their attention, and what artifacts they should look for.
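
The ontology itself is not spelled out in this article, but purely as a hypothetical sketch, such a shared vocabulary might map suspected manipulation methods to the regions and artifacts worth checking:

```python
# Purely illustrative sketch of a forensic vocabulary; not the project's ontology.
FORENSIC_ONTOLOGY = {
    "face_swap": {
        "focus_regions": ["face boundary", "hairline", "ears"],
        "artifacts": ["blending seams", "mismatched skin tone", "flickering edges"],
    },
    "lip_sync": {
        "focus_regions": ["mouth", "jaw"],
        "artifacts": ["blurry teeth", "audio-visual desynchronization"],
    },
    "fully_synthetic_face": {
        "focus_regions": ["eyes", "accessories", "background"],
        "artifacts": ["asymmetric reflections", "warped glasses or earrings"],
    },
}

def suggest_checks(method: str) -> list[str]:
    """List where to look and what to look for, given a suspected method."""
    entry = FORENSIC_ONTOLOGY.get(method, {})
    return [f"Inspect the {region} for {artifact}"
            for region in entry.get("focus_regions", [])
            for artifact in entry.get("artifacts", [])]

print(suggest_checks("lip_sync"))
```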

Candice Gerstner, an applied research mathematician with the Department of Defense, is collaborating on the project. She said that when analysts write a report that will be passed up the chain, they need to be sure that information has integrity.

“I’m not satisfied with a single detector that says 99 percent—I want more,” said Gerstner. “Having tools that are easily adaptable to new techniques and that continue to strive for explainability and low error rates is extremely important.”

In the future, the DeFake Project plans to expand to law enforcement, who are worried about fake evidence getting into the court system. RIT students are also researching reinforcement learning to limit bias and make sure AI models are fair.


Akib Shahriyar, a computing and information sciences Ph.D. student, is taking it one step further. He’s attacking the underlying model that powers the DeFake tool to uncover its weaknesses.

“In the end, we’re not just creating a detector and throwing it out there, where it could be exploited by adversaries,” said Shahriyar. “We’re building trust with the users by taking a responsible approach to deepfake detection.”
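
Probing a detector for weaknesses typically starts with standard adversarial-example techniques. The sketch below shows one of the simplest, the fast gradient sign method (FGSM), applied to a generic stand-in classifier. It illustrates the idea only; it is not the team's actual attack, and the model and class index are assumptions.

```python
# Minimal FGSM sketch: nudge an input slightly so a classifier's decision flips.
# Illustrative only; the stand-in model and "fake" class index are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()              # stand-in detector
image = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder face crop
fake_class = torch.tensor([1])                            # assume index 1 = "fake"

loss = F.cross_entropy(model(image), fake_class)
loss.backward()

epsilon = 0.01                                            # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).detach()
# If the detector's prediction changes on `adversarial`, the near-imperceptible
# perturbation has exposed a weakness the detector should be hardened against.
```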

How to identify a deepfake

Although RIT’s DeFake tool is not publicly available, here are some common ways to identify fake content.


Artifacts in the face
Look for inconsistencies in eye reflections and gaze patterns. Anomalies may occur in the face—unnatural smoothness, absence of outlines of individual teeth, and irregular facial hair.


Body posture
Deepfakes prioritize altering facial features, so body movements could appear odd or jerky.


Audio discrepancies
Does the audio sync seamlessly with the speaker’s mouth movements?


Contextual analysis
Consider the broader context, including the source, timestamps, and post history (for timestamps and other embedded details, see the metadata sketch after this list).


External verification
Do a reverse image search and try contacting the original sources.


Check the news
Look for reports about the content on reputable news sites.
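
Of these checks, reading a file's embedded metadata is the easiest to automate. Below is a minimal sketch using the Pillow library (an assumption; any EXIF reader works). Note that most social platforms strip this metadata, so an empty result proves nothing on its own.

```python
# Print whatever EXIF metadata an image still carries (timestamps, device, etc.).
# Sketch only; "suspect_image.jpg" is a hypothetical filename.
from PIL import Image
from PIL.ExifTags import TAGS

def print_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found (it may have been stripped).")
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

print_exif("suspect_image.jpg")
```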

How do deepfakes work?

The process uses deep learning algorithms to analyze thousands of images and video frames of the person being replicated. The neural network learns patterns, such as facial features and expressions, well enough to generate new, convincing images of that face.
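
One common recipe, the shared-encoder autoencoder behind the original "deepfakes," makes the idea concrete: a single encoder learns pose, expression, and lighting from both people, while each person gets their own decoder, and swapping decoders at inference time puts one person's face on the other's performance. The sketch below is heavily simplified, with made-up layer sizes and no real training loop.

```python
# Heavily simplified sketch of the shared-encoder face-swap idea.
# Layer sizes and training details are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()                           # shared: learns pose, expression, lighting
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per person

faces_a = torch.rand(8, 3, 64, 64)            # placeholder training batches
faces_b = torch.rand(8, 3, 64, 64)

# Training (sketch): each decoder learns to reconstruct its own person's face.
loss = (nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)
        + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b))
loss.backward()

# The swap: encode a frame of person A, decode it with person B's decoder,
# producing person B's face performing person A's expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))
```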