Distribution-aware $\ell_1$ Analysis Minimization

Raziyeh Takbiri, Sajad Daei
Dec 2022
Abstract
This work is about recovering an analysis-sparse vector, i.e., a vector that is sparse in some transform domain, from under-sampled measurements. In real-world applications, there often exist random analysis-sparse vectors whose distribution in the analysis domain is known. To exploit this information, a weighted $\ell_1$ analysis minimization is often considered. The task of choosing the weights in this case is, however, challenging and non-trivial. In this work, we provide an analytical method for choosing suitable weights. Specifically, we first obtain a tight upper-bound expression for the expected number of required measurements. This bound depends on two critical parameters, the support distribution and the expected sign in the analysis domain, both of which are accessible in advance. We then calculate near-optimal weights by minimizing this expression with respect to the weights. Our strategy works in both noiseless and noisy settings. Numerical results demonstrate the superiority of the proposed method: the weighted $\ell_1$ analysis minimization with our near-optimal weighting design needs considerably fewer measurements than its regular $\ell_1$ analysis counterpart.
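The weighted $\ell_1$ analysis minimization described in the abstract can be posed as a linear program. The sketch below is a minimal illustration, not the paper's code: the first-order difference operator, problem sizes, and uniform weights are assumptions chosen for the toy example. It solves $\min_x \sum_i w_i |(\Omega x)_i|$ subject to $Ax = y$ via the standard lift with slack variables $t \ge |\Omega x|$.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_analysis(A, y, Omega, w):
    """Solve  min_x  sum_i w_i |(Omega x)_i|  s.t.  A x = y
    as an LP over z = [x; t] with t >= |Omega x|."""
    m, n = A.shape
    p = Omega.shape[0]
    c = np.concatenate([np.zeros(n), w])           # objective: sum_i w_i t_i
    # Omega x - t <= 0  and  -Omega x - t <= 0  encode t >= |Omega x|
    A_ub = np.block([[Omega, -np.eye(p)],
                     [-Omega, -np.eye(p)]])
    b_ub = np.zeros(2 * p)
    A_eq = np.hstack([A, np.zeros((m, p))])        # measurement constraint A x = y
    bounds = [(None, None)] * n + [(0, None)] * p  # x free, slacks t nonnegative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:n], res

# Toy example (illustrative, not from the paper): a piecewise-constant signal
# is sparse under the first-order difference operator Omega.
rng = np.random.default_rng(0)
n, m = 12, 8
Omega = (np.eye(n) - np.eye(n, k=1))[:-1]          # (n-1) x n difference operator
x_true = np.repeat([1.0, -2.0, 0.5], [4, 4, 4])    # Omega @ x_true has 2 nonzeros
A = rng.standard_normal((m, n))
y = A @ x_true
w = np.ones(Omega.shape[0])                        # uniform weights = plain l1 analysis
x_hat, res = weighted_l1_analysis(A, y, Omega, w)
```

With distribution-aware weights, one would replace the uniform `w` by smaller weights on analysis-domain indices that are likely to lie in the support; the paper's contribution is an analytical rule for that choice.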