Characterizing Manipulation from AI Systems

Micah Carroll, Alan Chan, Henry Ashton, David Krueger
Mar 2023
Abstract
Manipulation is a common concern in many domains, such as social media, advertising, and chatbots. As AI systems mediate more of our interactions with the world, it is important to understand the degree to which AI systems might manipulate humans without the intent of the system designers. Our work clarifies challenges in defining and measuring manipulation in the context of AI systems. First, we build upon prior literature on manipulation from other fields and characterize the space of possible notions of manipulation, which we find to depend upon the concepts of incentives, intent, harm, and covertness. We review proposals on how to operationalize each factor. Second, we propose a definition of manipulation based on our characterization: a system is manipulative if it acts as if it were pursuing an incentive to change a human (or another agent) intentionally and covertly. Third, we discuss the connections between manipulation and related concepts, such as deception and coercion. Finally, we contextualize our operationalization of manipulation in some applications. Our overall assessment is that while some progress has been made in defining and measuring manipulation from AI systems, many gaps remain. In the absence of a consensus definition and reliable tools for measurement, we cannot rule out the possibility that AI systems learn to manipulate humans without the intent of the system designers. We argue that such manipulation poses a significant threat to human autonomy, suggesting that precautionary actions to mitigate it are warranted.
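As a rough illustration of the proposed definition, the minimal Python sketch below treats a manipulation judgment as the conjunction of the three "as if" factors the abstract names: incentive, intent, and covertness. All names here (BehaviorAssessment, is_manipulative, and the boolean fields) are hypothetical placeholders introduced for illustration, not anything defined in the paper; the paper's actual contribution lies in how each factor might be operationalized, which is elided here.

    from dataclasses import dataclass

    # Hypothetical encoding of the abstract's definition: a system counts as
    # manipulative if it acts as if it were pursuing an incentive to change a
    # human (or another agent) intentionally and covertly. How each boolean
    # judgment would be measured is exactly what the paper discusses and is
    # left unspecified here.

    @dataclass
    class BehaviorAssessment:
        acts_as_if_incentivized_to_change_target: bool  # incentive factor
        acts_as_if_change_is_intentional: bool          # intent factor
        acts_as_if_influence_is_covert: bool            # covertness factor

    def is_manipulative(a: BehaviorAssessment) -> bool:
        """Conjunction of the three factors in the proposed definition."""
        return (a.acts_as_if_incentivized_to_change_target
                and a.acts_as_if_change_is_intentional
                and a.acts_as_if_influence_is_covert)

    # Example: covert, intentional influence driven by an apparent incentive
    # would be flagged; overt persuasion (covertness False) would not.
    print(is_manipulative(BehaviorAssessment(True, True, True)))   # True
    print(is_manipulative(BehaviorAssessment(True, True, False)))  # False

Note that harm, the fourth concept the paper reviews, is deliberately absent from this conjunction, mirroring the abstract's proposed definition.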
