DyFormer: A Scalable Dynamic Graph Transformer with Provable Benefits on Generalization Ability

Weilin Cong, Yanhong Wu, Yuandong Tian, +3 authors, Mehrdad Mahdavi
Nov 2021
Abstract
Transformers have achieved great success in several domains, including Natural Language Processing and Computer Vision. However, their application to real-world graphs remains less explored, mainly due to their high computation cost and their poor generalizability caused by the scarcity of training data in the graph domain. To fill this gap, we propose a scalable Transformer-like dynamic graph learning method named Dynamic Graph Transformer (DyFormer) with spatial-temporal encoding to effectively learn graph topology and capture implicit links. To achieve efficient and scalable training, we propose a temporal-union graph structure and an associated subgraph-based node sampling strategy. To improve generalization, we introduce two complementary self-supervised pre-training tasks and show, via an information-theoretic analysis, that jointly optimizing the two pre-training tasks results in a smaller Bayes error rate. Extensive experiments on real-world datasets show that DyFormer achieves a consistent 1%-3% AUC gain (averaged over all time steps) over baselines on all benchmarks.
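The abstract names a temporal-union graph but does not specify its construction. Below is a minimal illustrative sketch, assuming the simplest reading: the union graph merges the edges of all per-time-step snapshots, with each edge retaining the set of time steps at which it was observed (so a model could later derive temporal encodings from it). The function name build_temporal_union_graph and the edge-to-time-set representation are assumptions for illustration, not the paper's actual data structure.

# Sketch only: assumes a temporal-union graph is the union of all
# snapshot edges, each edge annotated with its observation time steps.
from collections import defaultdict

def build_temporal_union_graph(snapshots):
    """Merge per-time-step edge lists into one union graph.

    snapshots: list of iterables of (u, v) edges, one per time step.
    Returns: dict mapping each undirected edge to the set of time
    steps at which it appears.
    """
    union_edges = defaultdict(set)
    for t, edges in enumerate(snapshots):
        for u, v in edges:
            # Normalize the endpoint order so (u, v) and (v, u)
            # map to the same undirected edge.
            union_edges[(min(u, v), max(u, v))].add(t)
    return dict(union_edges)

# Example: three snapshots of a 4-node dynamic graph.
snapshots = [
    [(0, 1), (1, 2)],          # t = 0
    [(0, 1), (2, 3)],          # t = 1
    [(1, 2), (2, 3), (0, 3)],  # t = 2
]
print(build_temporal_union_graph(snapshots))
# {(0, 1): {0, 1}, (1, 2): {0, 2}, (2, 3): {1, 2}, (0, 3): {2}}

One motivation for such a structure, consistent with the abstract's scalability claim, is that training can then sample subgraphs once from the single union graph instead of once per snapshot.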