AI empiricism: the only game in town? A Cognitive Science Speaker Series Presentation

Event image: Graphic promoting the "Cognitive Science Speaker Series" for Fall 2025, with the title in blue and white text on an orange banner over a light abstract background of hexagonal molecular patterns.

Speaker: Brett Karlan, Ph.D.

Title: AI empiricism: the only game in town?

Short Bio: Brett Karlan is an assistant professor of philosophy at Purdue University. He earned his Ph.D. in philosophy from Princeton University and previously held postdoctoral positions in the McCoy Family Center for Ethics in Society at Stanford University and the Department of History and Philosophy of Science at the University of Pittsburgh. He earned bachelor’s degrees in philosophy and neurobiology at the University of Iowa (go Hawks).

Brett’s work focuses primarily on the intersection of the philosophy of science (especially cognitive science and artificial intelligence) and normative philosophy (especially epistemology, ethics, and the philosophy of action). His work has been published in the Australasian Journal of Philosophy, Ratio, the Journal of Experimental and Theoretical Artificial Intelligence, and Neuropsychology, among other venues. He has also been a researcher on two interdisciplinary grants funded by the Templeton World Charity Foundation, bringing philosophical reflection into direct contact with computer scientists, psychologists, engineers, and animal cognition researchers.

Abstract: I offer an epistemic argument against the dominance of empiricism and empiricist-inspired methods in contemporary machine learning (ML) research. I first establish, as many ML researchers and philosophers of ML claim, that standard methods for constructing deep learning networks are best thought of as a kind of empiricism about cognitive architecture. I then argue that, even given the resounding success of contemporary ML models, there are few (if any) strong reasons to interpret their success as ruling out competing nativist approaches to cognitive architecture, especially given the extreme resource capture that empiricist methods have undergone in ML research. If we care about research goals like modeling human cognition or building genuinely rational machines, I argue we have good reason to support a robust competing artificial intelligence (AI) nativist tradition. I show how this argument applies to two recent defenses of AI empiricism in the philosophical literature: Buckner's (2023) book-length defense of moderate AI empiricism and Long's (2024) dilemma argument against AI nativism.

ASL-English interpreters have been requested. Light refreshments will be provided.


Contact
Frances Cooley

This event may be photographed/videoed by RIT. By attending, you grant Rochester Institute of Technology (“RIT”) permission to use photographs and/or audio-visual recordings in which you appear, now and in the future, for illustration, promotion, art, editorial, advertising, and trade, or any other purpose.

Event Snapshot

When and Where

November 14, 2025
11:00 am - 12:00 pm
Room/Location: WAL-3440

Who

Open to the Public

Interpreter Requested?

Yes

Topics
imaging science
research