Graph Inductive Biases in Transformers without Message Passing
Transformers for graph data are increasingly studied and successful in numerous learning tasks. Graph inductive biases are crucial for Graph Transformers, and previous works incorporate them using message-passing modules and/or positional encodings. However, Graph Transformers that use messag...
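As a concrete illustration of the positional-encoding approach the abstract mentions, the sketch below computes Laplacian eigenvector positional encodings, one common way prior Graph Transformers inject graph structure into node features. This is a generic, assumed example for context, not the method proposed in this paper.

```python
# Minimal sketch (assumption: Laplacian eigenvector PEs, a common
# graph positional encoding; not this paper's proposed method).
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k smallest non-trivial Laplacian eigenvectors as node PEs."""
    deg = adj.sum(axis=1)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # eigh returns eigenvalues in ascending order with orthonormal eigenvectors
    vals, vecs = np.linalg.eigh(lap)
    # Skip the trivial constant eigenvector (eigenvalue ~ 0)
    return vecs[:, 1:k + 1]

# Example: a 4-node cycle graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(A, 2)
print(pe.shape)  # (4, 2): one 2-dimensional encoding per node
```

The resulting per-node vectors are typically concatenated with (or added to) the node features before the first attention layer, giving the otherwise permutation-equivariant Transformer a notion of graph position.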
Main Authors:
Format: Conference item
Language: English
Published: Proceedings of Machine Learning Research, 2023