On Tuesday, February 10th, 2026 at 14:30, I will be defending my thesis in Salle Aurigny at the INRIA Centre de l'Université de Rennes.
I will post the meeting link shortly before the defense.
I will post the manuscript link in January.
Machine learning-based prediction services are now widely deployed across industries by companies, governments, and individuals. Yet these services often rely on a complex AI supply chain whose components (training data, models, infrastructure), while critical to their performance, are partially or completely hidden from end users. To an external user or regulator, these prediction services therefore appear as black boxes, which complicates their evaluation and opens avenues for manipulation. In the presence of deceptive model providers, this thesis aims to understand the fundamental limits of black-box auditing and to design protocols that provide guarantees beyond the black-box interaction model. This manuscript presents three contributions towards that goal. First, I formalize the search for the minimal assumption beyond the black-box as a prior construction problem and propose a new audit method that leverages the labeled data available to the auditor. Then, I study the benefits of requesting the hypothesis class used by the platform to inform the audit. Finally, to cheaply detect post-audit attacks, I introduce a new model fingerprinting baseline together with a theoretical analysis for detecting model changes.
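To make the last contribution concrete, here is a minimal sketch of fingerprint-based change detection: the auditor records the model's predictions on a fixed probe set at audit time, then re-queries the service later and compares. The probe set, the hash-based fingerprint, and the stand-in classifiers below are illustrative assumptions for this sketch, not the baseline studied in the thesis.

```python
import hashlib
import numpy as np

def fingerprint(model_fn, probe_inputs):
    """Hash the model's predicted labels on a fixed probe set."""
    preds = np.asarray(model_fn(probe_inputs), dtype=np.int64)
    return hashlib.sha256(preds.tobytes()).hexdigest()

# Fixed probe set, chosen once (and kept private) by the auditor.
rng = np.random.default_rng(0)
probe = rng.normal(size=(32, 8))

# Stand-in linear classifiers: the audited model and a post-audit swap.
audited = lambda x: x.sum(axis=1) > 0
swapped = lambda x: x.sum(axis=1) <= 0  # decision rule flipped

fp_at_audit = fingerprint(audited, probe)
fp_recheck = fingerprint(swapped, probe)
print("model change detected:", fp_at_audit != fp_recheck)  # True
```

An exact-match fingerprint like this only detects changes that alter predictions on the probe points, so the choice and size of the probe set govern what a deceptive provider can change unnoticed.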