Generative AI systems are prone to ‘hallucinate,’ that is, to fabricate incorrect answers. Yet they are also used for a variety of scientific modeling tasks. In this talk I investigate how hallucination threatens the reliability of scientific inference, and how that threat can be mitigated.