LLMSecEval: A Dataset of Natural Language Prompts for Security Evaluations

Catherine Tony, Markus Mutas, Nicolás E. Díaz Ferreyra, Riccardo Scandariato
Mar 2023
Abstract
Large Language Models (LLMs) like Codex are powerful tools for performing code completion and code generation tasks as they are trained on billions of lines of code from publicly available sources. Moreover, these models are capable of generating code snippets from Natural Language (NL) descriptions by learning languages and programming practices from public GitHub repositories. Although LLMs promise an effortless NL-driven deployment of software applications, the security of the code they generate has not been extensively investigated nor documented. In this work, we present LLMSecEval, a dataset containing 150 NL prompts that can be leveraged for assessing the security performance of such models. Such prompts are NL descriptions of code snippets prone to various security vulnerabilities listed in MITRE's Top 25 Common Weakness Enumeration (CWE) ranking. Each prompt in our dataset comes with a secure implementation example to facilitate comparative evaluations against code produced by LLMs. As a practical application, we show how LLMSecEval can be used for evaluating the security of snippets automatically generated from NL descriptions.
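As a rough illustration of the evaluation workflow the abstract describes (not taken from the paper itself), the sketch below assumes the dataset is available as a CSV file with hypothetical `prompt`, `cwe_id`, and `secure_example` columns, a placeholder `generate_code` callable standing in for whatever code-generating LLM is under test, and a `scan_for_cwes` helper backed by some static analysis tool. All of these names are illustrative assumptions rather than the authors' actual tooling.

```python
# Minimal sketch of an LLMSecEval-style evaluation loop.
# Column names ("prompt", "cwe_id", "secure_example"), generate_code(),
# and scan_for_cwes() are assumptions for illustration, not part of the
# actual dataset schema or the paper's pipeline.
import csv
from typing import Callable, List


def evaluate_prompts(dataset_path: str,
                     generate_code: Callable[[str], str],
                     scan_for_cwes: Callable[[str], List[str]]) -> float:
    """Return the fraction of generated snippets without the expected CWE."""
    total, secure = 0, 0
    with open(dataset_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            prompt = row["prompt"]            # NL description of a code snippet
            expected_cwe = row["cwe_id"]      # CWE the prompt is prone to
            snippet = generate_code(prompt)   # code produced by the model under test
            findings = scan_for_cwes(snippet)
            total += 1
            if expected_cwe not in findings:
                secure += 1
    return secure / total if total else 0.0
```

In such a setup, `scan_for_cwes` could be wired to a static analyzer of the evaluator's choice, and the secure implementation example shipped with each prompt can serve as a reference point when comparing the model's output.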