A Generalized EigenGame with Extensions to Multiview Representation Learning
James Chapman, Ana Lawry Aguila, Lennie Wells
Nov 2022
Abstract
Generalized Eigenvalue Problems (GEPs) encompass a range of interesting dimensionality reduction methods. Development of efficient stochastic approaches to these problems would allow them to scale to larger datasets. Canonical Correlation Analysis (CCA) is one example of a GEP for dimensionality reduction which has found extensive use in problems with two or more views of the data. Deep learning extensions of CCA require large mini-batch sizes, and therefore large memory consumption, in the stochastic setting to achieve good performance, and this has limited their application in practice. Inspired by the Generalized Hebbian Algorithm, we develop an approach to solving stochastic GEPs in which all constraints are softly enforced by Lagrange multipliers. Then, by considering the integral of this Lagrangian function, its pseudo-utility, and inspired by recent formulations of Principal Components Analysis and GEPs as games with differentiable utilities, we develop a game-theory inspired approach to solving GEPs. We show that our approaches share much of the theoretical grounding of the previous Hebbian and game-theoretic approaches for the linear case, but our method permits extension to general function approximators like neural networks for certain GEPs for dimensionality reduction, including CCA, which means it can be used for deep multiview representation learning. We demonstrate the effectiveness of our method for solving GEPs in the stochastic setting on canonical multiview datasets and achieve state-of-the-art performance when optimizing Deep CCA.
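The central object of the abstract is the generalized eigenvalue problem Aw = λBw, of which CCA is a special case: stacking the per-view covariance blocks gives A = [[0, Cxy], [Cxy^T, 0]] and B = diag(Cxx, Cyy), whose top eigenvalue is the first canonical correlation. The sketch below is not the paper's stochastic algorithm; it is a minimal full-batch reference on synthetic two-view data, using a standard generalized eigensolver, to make the GEP-to-CCA reduction concrete.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, p, q = 500, 5, 4

# Two views generated from a shared 2-dimensional latent signal,
# plus small independent noise, so they are strongly correlated.
z = rng.normal(size=(n, 2))
X = z @ rng.normal(size=(2, p)) + 0.1 * rng.normal(size=(n, p))
Y = z @ rng.normal(size=(2, q)) + 0.1 * rng.normal(size=(n, q))
X -= X.mean(axis=0)
Y -= Y.mean(axis=0)

# Empirical covariance blocks.
Cxx = X.T @ X / n
Cyy = Y.T @ Y / n
Cxy = X.T @ Y / n

# Assemble the CCA GEP:  A w = lambda B w.
A = np.block([[np.zeros((p, p)), Cxy],
              [Cxy.T, np.zeros((q, q))]])
B = np.block([[Cxx, np.zeros((p, q))],
              [np.zeros((q, p)), Cyy]])

# Full-batch generalized eigensolver (B is symmetric positive definite here).
evals, evecs = eigh(A, B)

# The largest generalized eigenvalue is the top canonical correlation.
rho = evals[-1]
```

A stochastic method of the kind the paper develops would instead update `evecs` iteratively from mini-batch estimates of these covariances, with the B-orthogonality constraints enforced softly via Lagrange multipliers rather than by an exact eigendecomposition.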