Raphaël Millière
UT Austin
RLP 4.422 Computational Linguistics Seminar Room

Artificial Competence

Raphaël Millière
Associate Professor, University of Oxford

Title: Artificial Competence

Abstract: AI systems increasingly match or surpass humans on complex tasks, yet they often exhibit surprising failure modes or inconsistent behaviour across evaluation contexts. Cognitive science relies on the distinction between competence and performance to explain similar discrepancies in humans, but this distinction is often framed in terms that preclude its straightforward application to artificial neural networks. This paper develops a unified account of competence applicable to both biological and artificial systems, locating competence at the algorithmic level of analysis. On this view, a system is competent in a domain when it implements an algorithm that reliably generalises across that domain. Importantly, the relevant notion of implementation applies to neural networks when formalised under causal abstraction: a neural network implements an algorithm if there exists a mapping between the network's components and the algorithm's variables such that both respond identically to causal interventions. This framework provides a principled way to distinguish competence from auxiliary factors that affect performance across systems with very different constraints and architectures. It thereby accounts for double dissociations between performance and competence in both humans and AI systems, and offers a template for designing competence-sensitive evaluation in cognitive science and AI.
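The abstract's implementation criterion can be made concrete with a toy illustration (hypothetical, not taken from the talk): a "network" implements an algorithm when some mapping of network components to algorithmic variables yields identical behaviour under every intervention on the mapped pairs. Here the algorithm computes OUT = (A AND B) OR C through an intermediate variable V, the "network" has an internal node n1 mapped to V, and the check sweeps inputs and interventions. All names (`algorithm`, `network`, `implements`) are invented for this sketch.

```python
# Toy sketch of intervention equivalence under causal abstraction.
# (Illustrative only; real work uses trained networks, not hand-written ones.)

def algorithm(a, b, c, v_override=None):
    """High-level causal model: V = a AND b, OUT = V OR c."""
    v = (a and b) if v_override is None else v_override
    return v or c

def network(a, b, c, n1_override=None):
    """Low-level system with an internal node n1, mapped to variable V."""
    n1 = (a and b) if n1_override is None else n1_override
    return n1 or c

def implements(inputs, overrides):
    """Check that, for every input and every intervention on the mapped
    component/variable pair, the two systems produce the same output."""
    for a, b, c in inputs:
        for ov in overrides:
            if algorithm(a, b, c, ov) != network(a, b, c, ov):
                return False
    return True

inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
print(implements(inputs, overrides=[None, 0, 1]))  # True: the mapping holds
```

If the network's node n1 computed anything other than A AND B, some intervention would expose the mismatch and the check would fail, which is the sense in which the criterion is stricter than input-output equivalence alone.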

Speaker Bio: Raphaël Millière is an Associate Professor at the University of Oxford and a Fellow of Jesus College, with an affiliation at the Institute for Ethics in AI. He also holds an AI2050 Fellowship from Schmidt Sciences. He was previously a Lecturer (Assistant Professor) at Macquarie University in Sydney and a Presidential Scholar at Columbia University in New York, and he completed his PhD at Oxford.

Millière's research mainly focuses on understanding modern artificial neural networks, such as large language models, through theoretical analysis, behavioral evaluation, and interpretability methods. Drawing on philosophy and cognitive science, he aims to establish frameworks for fair and meaningful comparisons between machines and humans in domains such as language processing and reasoning. In turn, he uses insights from studying neural networks to inform theories of human cognition. He also has an ongoing interest in issues related to AI safety, (self-)consciousness, mental representation, and comparative psychology.