Recent & Upcoming Talks
2024
Anthropocentric bias and the possibility of artificial cognition
Much has been written about anthropomorphic bias in the study of LLMs. Here we discuss various kinds of anthropocentric bias.
Jul 26, 2024
Vienna
Charles Rathkopf, Raphaël Millière
Extending ourselves with generative AI
Jun 15, 2024
Paris
Deep learning models in science: some risks and opportunities
Under some conditions, we ought to trade interpretability for predictive power.
Jun 11, 2024
Jülich/Düsseldorf
Cognitive ontology for large language models
This talk describes some of the conceptual and methodological difficulties involved in articulating the cognitive capacities of large language models.
Apr 26, 2024
Dubrovnik
Two constraints on the neuroscience of content
This talk describes theoretical constraints on current attempts to decode mental content from brain data.
Mar 21, 2024 4:00 PM
Antwerp
2023
Do large language models believe?
This talk examines whether LLMs can properly be said to have beliefs.
Nov 9, 2023
Erlangen, Germany
Might deep learning vindicate functionalism?
Deep neural networks optimized to perform object recognition tasks predict patterns of neural activation in humans and monkeys, despite not having been trained on brain data. I discuss whether this can be viewed as a case of multiple realization.
Nov 9, 2023
Warsaw, Poland
Culpability and control in BCI-mediated action
This is a talk about brain-computer interfaces and their relationship to intentional mental states.
Jul 8, 2023 2:00 PM
London
Strange error and the possibility of machine knowledge
Rather than merely demonstrating the fragility of ML models, strange error might be evidence of hidden knowledge.
Jan 1, 2023
Stuttgart
2021
Strange risk in AI ethics
Where ML models are used as the centerpiece of an epistemic classification procedure, reliability is not sufficient for ethical use. The nature of classification errors should be taken into account.
Dec 1, 2021 2:00 PM
Delft University of Technology
Knowledge transfer from machine learning to neuroscience
An invited talk for the Max Planck School of Cognition.
Dec 1, 2021 2:00 PM
Berlin
Culpability and control in BCI-mediated action
A neuroethics talk for our large neuroscience group in Jülich. The accompanying paper will be a chapter in a forthcoming neuroethics book.
Dec 1, 2021 2:00 PM
Jülich