
Boosting Out-of-Distribution Detection with Multiple Pre-trained Models

Feng Xue, Zi He, Chuanlong Xie, Falong Tan, Zhenguo Li
Dec 2022
Abstract
Out-of-Distribution (OOD) detection, i.e., identifying whether an input is sampled from a novel distribution other than the training distribution, is a critical task for safely deploying machine learning systems in the open world. Recently, post hoc detection utilizing pre-trained models has shown promising performance and can be scaled to large-scale problems. This advance raises a natural question: Can we leverage the diversity of multiple pre-trained models to improve the performance of post hoc detection methods? In this work, we propose a detection enhancement method by ensembling multiple detection decisions derived from a zoo of pre-trained models. Our approach uses the p-value instead of the commonly used hard threshold and leverages a fundamental framework of multiple hypothesis testing to control the true positive rate of In-Distribution (ID) data. We focus on the usage of model zoos and provide systematic empirical comparisons with current state-of-the-art methods on various OOD detection benchmarks. The proposed ensemble scheme shows consistent improvement compared to single-model detectors and significantly outperforms the current competitive methods. Our method substantially improves the relative performance by 65.40% and 26.96% on the CIFAR10 and ImageNet benchmarks.
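The abstract describes the general recipe (per-model p-values combined under a multiple-hypothesis-testing framework) but not the exact combination rule. The following is a minimal sketch of that idea under two assumptions not stated in the abstract: each model's p-value is computed empirically from a held-out set of ID scores, and the per-model p-values are combined with Fisher's method. The function names and the chi-squared threshold are illustrative, not the paper's.

```python
import numpy as np

def empirical_p_value(score, id_calib_scores):
    """Empirical p-value of a test score under one model's ID score
    distribution. A small p-value means the score is unusually low
    for ID data, i.e., the input looks OOD to this model."""
    return (1 + np.sum(id_calib_scores <= score)) / (1 + len(id_calib_scores))

def fisher_statistic(p_values):
    """Fisher's method: under the ID null hypothesis, the statistic
    -2 * sum(log p_i) follows a chi-squared distribution with
    2 * len(p_values) degrees of freedom."""
    return -2.0 * np.sum(np.log(p_values))

def detect_ood(test_scores, calib_scores_per_model, threshold):
    """Flag an input as OOD by ensembling per-model p-values.
    `test_scores[i]` is model i's confidence score for the input;
    `calib_scores_per_model[i]` are model i's scores on held-out ID data.
    A large combined statistic rejects the ID hypothesis."""
    ps = [empirical_p_value(s, c)
          for s, c in zip(test_scores, calib_scores_per_model)]
    return fisher_statistic(ps) > threshold

# Illustrative run with synthetic scores: 3 models whose ID scores
# cluster around 5; an input scoring near 0 everywhere is flagged.
rng = np.random.default_rng(0)
calib = [rng.normal(5.0, 1.0, 1000) for _ in range(3)]
threshold = 12.59  # ~ chi-squared 95th percentile, 2*3 = 6 dof
print(detect_ood([0.0, 0.0, 0.0], calib, threshold))  # OOD-like input
print(detect_ood([5.0, 5.0, 5.0], calib, threshold))  # ID-like input
```

Choosing the threshold from the chi-squared null distribution is what gives control over the ID true positive rate: under the ID hypothesis, only a fixed fraction of ID inputs exceed it by construction.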