Deep learning models in science: some risks and opportunities

Abstract

Deep neural networks offer striking improvements in predictive accuracy across many areas of science, and in biological sequence modeling in particular. But that predictive power comes at a steep price: we must give up on interpretability. In this talk, I argue that, contrary to many voices in AI ethics calling for more interpretable models, this is a price we should be willing to pay.

Date
Jun 11, 2024
Location
Jülich/Düsseldorf