Speech Modeling with a Hierarchical Transformer Dynamical VAE

Xiaoyu Lin, Xiaoyu Bie, Simon Leglaive, Laurent Girin, Xavier Alameda-Pineda
Mar 2023
Abstract
The dynamical variational autoencoders (DVAEs) are a family of latent-variable deep generative models that extends the VAE to model a sequence of observed data and a corresponding sequence of latent vectors. In almost all the DVAEs of the literature, the temporal dependencies within each sequence and across the two sequences are modeled with recurrent neural networks. In this paper, we propose to model speech signals with the Hierarchical Transformer DVAE (HiT-DVAE), which is a DVAE with two levels of latent variable (sequence-wise and frame-wise) and in which the temporal dependencies are implemented with the Transformer architecture. We show that HiT-DVAE outperforms several other DVAEs for speech spectrogram modeling, while enabling a simpler training procedure, revealing its high potential for downstream low-level speech processing tasks such as speech enhancement.
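As a rough illustration (the notation below is assumed for this sketch, not taken from the paper), a generic DVAE factorizes the joint distribution of an observed sequence x_{1:T} and its frame-wise latent sequence z_{1:T} causally over time; adding a sequence-wise latent variable, denoted w here, on top of the frame-wise latents gives a hierarchical variant of the kind the abstract describes:

p(x_{1:T}, z_{1:T}, w) = p(w) \prod_{t=1}^{T} p_\theta(x_t \mid x_{1:t-1}, z_{1:t}, w)\, p_\theta(z_t \mid x_{1:t-1}, z_{1:t-1}, w)

In HiT-DVAE, per the abstract, these temporal dependencies are implemented with Transformer (attention) layers rather than with recurrent neural networks.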