Among Us: Adversarially Robust Collaborative Perception by Consensus

Yiming Li, Qi Fang, Jiamu Bai, Siheng Chen, Felix Juefei-Xu, Chen Feng
Mar 2023
Multiple robots can collaboratively perceive a scene (e.g., detect objects) better than individuals, but they are vulnerable to adversarial attacks when using deep learning. This could be addressed by adversarial defense, but its training requires the often-unknown attacking mechanism. Instead, we propose ROBOSAC, a novel sampling-based defense strategy generalizable to unseen attackers. Our key idea is that collaborative perception should lead to consensus rather than dissensus in results compared to individual perception. This leads to our hypothesize-and-verify framework: perception results with and without collaboration from a random subset of teammates are compared until reaching a consensus. In such a framework, more teammates in the sampled subset often entail better perception performance but require longer sampling time to reject potential attackers. Thus, we derive how many sampling trials are needed to ensure the desired size of an attacker-free subset, or equivalently, the maximum size of such a subset that we can successfully sample within a given number of trials. We validate our method on the task of collaborative 3D object detection in autonomous driving scenarios.
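The trial-count derivation the abstract alludes to follows the same combinatorial logic as RANSAC. Below is a minimal, hedged sketch (not the paper's exact derivation) that assumes subsets of teammates are drawn uniformly at random without replacement; the function names `clean_subset_prob` and `trials_needed` are illustrative, not from the paper.

```python
from math import comb, log, ceil

def clean_subset_prob(n_teammates, n_attackers, subset_size):
    """Probability that a uniformly sampled subset of teammates
    contains no attacker (hypergeometric, without replacement)."""
    if subset_size > n_teammates - n_attackers:
        return 0.0  # not enough benign teammates to fill the subset
    return comb(n_teammates - n_attackers, subset_size) / comb(n_teammates, subset_size)

def trials_needed(n_teammates, n_attackers, subset_size, success_prob=0.99):
    """Number of independent sampling trials so that at least one
    attacker-free subset is drawn with probability >= success_prob."""
    p = clean_subset_prob(n_teammates, n_attackers, subset_size)
    if p == 0.0:
        return None  # an attacker-free subset of this size is impossible
    if p == 1.0:
        return 1
    # P(all N trials hit an attacker) = (1 - p)^N <= 1 - success_prob
    return ceil(log(1.0 - success_prob) / log(1.0 - p))

# e.g., 5 teammates, 1 unknown attacker, desired clean subset of 4:
# p = C(4,4)/C(5,4) = 0.2, so ~21 trials give 99% confidence
print(trials_needed(5, 1, 4, success_prob=0.99))
```

This also exposes the trade-off stated in the abstract: a larger `subset_size` improves expected perception quality but shrinks `clean_subset_prob`, inflating the number of trials needed to reject potential attackers.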