Fully Differentiable RANSAC

Tong Wei, Yash Patel, Jiri Matas, Daniel Barath
Dec 2022
Abstract
We propose the fully differentiable $\nabla$-RANSAC. It predicts the inlier probabilities of the input data points, exploits the predictions in a guided sampler, and estimates the model parameters (e.g., fundamental matrix) and its quality while propagating the gradients through the entire procedure. The random sampler in $\nabla$-RANSAC is based on a clever re-parametrization strategy, i.e., the Gumbel Softmax sampler, that allows propagating the gradients directly into the subsequent differentiable minimal solver. The model quality function marginalizes over the scores of all models estimated within $\nabla$-RANSAC to guide the network in learning accurate and useful probabilities. $\nabla$-RANSAC is the first to unlock the end-to-end training of geometric estimation pipelines comprising feature detection, matching, and RANSAC-like randomized robust estimation. As a proof of its potential, we train $\nabla$-RANSAC together with LoFTR, a recent detector-free feature matcher, to find reliable correspondences in an end-to-end manner. We test $\nabla$-RANSAC on a number of real-world datasets for fundamental and essential matrix estimation. It is superior to the state of the art in terms of accuracy while being among the fastest methods. The code and trained models will be made public.
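The key idea in the abstract is the Gumbel-Softmax re-parametrization: a discrete minimal-sample selection that still lets gradients flow back to the predicted inlier probabilities. The following is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation; the function `sample_minimal_set`, the toy data, and the weighted-solver usage are assumptions for illustration.

```python
# A minimal sketch (not the authors' code) of Gumbel-Softmax-guided sampling,
# assuming PyTorch; `sample_minimal_set` and the toy data are hypothetical.
import torch
import torch.nn.functional as F

def sample_minimal_set(logits, sample_size=8, tau=1.0):
    """Draw `sample_size` points guided by predicted inlier logits.

    Uses the straight-through Gumbel-Softmax trick so the discrete selection
    still passes gradients back into `logits`. Points may repeat in this toy
    version; the paper's sampler is more elaborate.
    """
    selections = []
    for _ in range(sample_size):
        # hard=True gives a one-hot selection in the forward pass while the
        # backward pass uses the soft relaxation (straight-through estimator).
        one_hot = F.gumbel_softmax(logits, tau=tau, hard=True)
        selections.append(one_hot)
    return torch.stack(selections)  # (sample_size, num_points)

# Usage: combine the selection with the correspondences before feeding a
# differentiable minimal solver (e.g., a weighted 8-point algorithm).
num_points = 100
points = torch.randn(num_points, 4)                    # tentative correspondences (x1, y1, x2, y2)
logits = torch.randn(num_points, requires_grad=True)   # predicted inlier scores
sel = sample_minimal_set(logits, sample_size=8)
minimal_sample = sel @ points                          # (8, 4), differentiable w.r.t. logits
minimal_sample.sum().backward()                        # gradients reach the inlier predictor
print(logits.grad is not None)                         # True
```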