
Among Us: Adversarially Robust Collaborative Perception by Consensus

Yiming Li, Qi Fang, Jiamu Bai, Siheng Chen, Felix Juefei-Xu, Chen Feng
Mar 2023
Abstract
Multiple robots can perceive a scene (e.g., detect objects) collaboratively better than individuals, yet collaborative perception easily suffers from adversarial attacks when using deep learning. This could be addressed by adversarial defense, but its training requires the often-unknown attacking mechanism. In contrast, we propose ROBOSAC, a novel sampling-based defense strategy generalizable to unseen attackers. Our key idea is that collaborative perception should lead to consensus rather than dissensus in results compared to individual perception. This leads to our hypothesize-and-verify framework: perception results with and without collaboration from a random subset of teammates are compared until reaching a consensus. In such a framework, more teammates in the sampled subset often entail better perception performance but require longer sampling time to reject potential attackers. Thus, we derive how many sampling trials are needed to ensure the desired size of an attacker-free subset, or equivalently, the maximum size of such a subset that we can successfully sample within a given number of trials. We validate our method on the task of collaborative 3D object detection in autonomous driving scenarios.
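The trial-count derivation sketched in the abstract is analogous to RANSAC's inlier-sampling bound. A minimal Python sketch of that style of calculation is shown below; the function name `trials_needed` and the exact formula are illustrative assumptions here, not the paper's verbatim derivation.

```python
from math import ceil, comb, log

def trials_needed(num_teammates: int, num_attackers: int,
                  subset_size: int, confidence: float = 0.99) -> int:
    """RANSAC-style bound (illustrative): number of random trials so that,
    with the given confidence, at least one sampled teammate subset
    contains no attacker."""
    # Probability that one uniformly sampled subset is attacker-free.
    p_clean = (comb(num_teammates - num_attackers, subset_size)
               / comb(num_teammates, subset_size))
    if p_clean == 0:
        raise ValueError("subset larger than the benign teammate pool")
    if p_clean == 1:
        return 1
    # Require 1 - (1 - p_clean)^n >= confidence, solved for n.
    return ceil(log(1 - confidence) / log(1 - p_clean))

# Larger subsets are harder to sample attacker-free, so more trials
# are needed -- the performance/verification-time trade-off above.
print(trials_needed(6, 2, 1))
print(trials_needed(6, 2, 3))
```

This captures the trade-off stated in the abstract: a bigger attacker-free subset gives better collaborative perception, but the number of trials needed to find one grows quickly.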
