
Protecting Society from AI Misuse: When are Restrictions on Capabilities Warranted?

Markus Anderljung, Julian Hazell
Mar 2023
Abstract
Artificial intelligence (AI) systems will increasingly be used to cause harm as they grow more capable. In fact, AI systems are already starting to be used to automate fraudulent activities, violate human rights, create harmful fake images, and identify dangerous toxins. To prevent some misuses of AI, we argue that targeted interventions on certain capabilities will be warranted. These restrictions may include controlling who can access certain types of AI models, what they can be used for, whether outputs are filtered or can be traced back to their user, and the resources needed to develop them. We also contend that some restrictions on non-AI capabilities needed to cause harm will be required. Though capability restrictions risk reducing use more than misuse (facing an unfavorable Misuse-Use Tradeoff), we argue that interventions on capabilities are warranted when other interventions are insufficient, the potential harm from misuse is high, and there are targeted ways to intervene on capabilities. We provide a taxonomy of interventions that can reduce AI misuse, focusing on the specific steps required for a misuse to cause harm (the Misuse Chain), and a framework to determine if an intervention is warranted. We apply this reasoning to three examples: predicting novel toxins, creating harmful images, and automating spear phishing campaigns.