Black boxes and confirmation bias in media forensics
As deep learning reshapes media forensics, black-box models are becoming the norm: powerful but inscrutable, and increasingly difficult to question. In the rush to boost performance, the field risks sidelining interpretability and scientific rigor. Efforts in explainable AI are struggling to keep pace, and the danger is clear: without understanding why a detector works, we may be building forensic tools on shaky ground. It wouldn’t be the first time: other forensic disciplines have already stumbled here, as the 2009 NAS report famously revealed.
In this talk, I’ll argue that working with black boxes demands a mindset closer to that of biology or psychology than of computer science: propose a theory, then try hard to break it. Too often, we cling to pet explanations and fall into the trap of confirmation bias, especially when experimental design is loose and data leakage slips in unnoticed. I’ll draw on real examples, especially in camera source attribution, to show how the absence of competing hypotheses has led to flawed conclusions, and I’ll share ideas on how to frame stronger alternative hypotheses.
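As a concrete illustration of the kind of leakage at issue, consider how a camera-attribution experiment is split into training and test sets. The sketch below, which is my own minimal example and not part of the talk, contrasts a naive per-image split with a device-aware split; the synthetic features, the `camera_id` grouping variable, and the scikit-learn utilities are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not from the talk): in camera source
# attribution, randomly splitting individual images can leak device- or
# scene-specific artifacts between train and test, inflating accuracy.
# Grouping the split by physical device is one simple safeguard.
import numpy as np
from sklearn.model_selection import train_test_split, GroupShuffleSplit

rng = np.random.default_rng(0)
n_images = 1000
X = rng.normal(size=(n_images, 64))             # stand-in image features
y = rng.integers(0, 5, size=n_images)           # camera-model label (5 classes)
camera_id = rng.integers(0, 50, size=n_images)  # physical device each image came from

# Naive split: images from the same physical device land on both sides,
# so a detector can score well by memorizing device fingerprints.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Device-aware split: every device appears in exactly one partition,
# forcing the model to generalize to unseen cameras.
gss = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=camera_id))
X_tr_g, X_te_g = X[train_idx], X[test_idx]
y_tr_g, y_te_g = y[train_idx], y[test_idx]

# Sanity check: no device overlap between train and test.
assert set(camera_id[train_idx]).isdisjoint(camera_id[test_idx])
```

A gap between the scores obtained under the two splits is itself a useful competing hypothesis: the detector may be keying on device fingerprints rather than the property it is claimed to detect.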