A Temporally Augmented Graph Attention Network for Affordance Classification
Summary
EEG-tGAT is a temporally augmented Graph Attention Network that improves affordance classification from interaction sequences by incorporating temporal attention and temporal dropout. The model extends GATv2 for sequential data whose temporal dimensions are not semantically uniform.
Source: https://huggingface.co/papers/2604.10149
Abstract
Graph attention networks (GATs) provide a powerful framework for learning node representations in relational data, but existing variants such as the original Graph Attention Network (GAT) operate mainly on static graphs and rely on implicit temporal aggregation when applied to sequential data. In this paper, we introduce the Electroencephalography-temporal Graph Attention Network (EEG-tGAT), a temporally augmented formulation of GATv2 tailored for affordance classification from interaction sequences. The proposed model incorporates temporal attention to modulate the contribution of different time segments and temporal dropout to regularize learning across temporally correlated observations. The design reflects the assumption that temporal dimensions in affordance data are not semantically uniform and that discriminative information may be unevenly distributed across time. Experimental results on affordance datasets show that EEG-tGAT achieves improved classification performance over GATv2. The observed gains suggest that explicitly encoding temporal importance and enforcing temporal robustness introduces inductive biases better aligned with the structure of affordance-driven interaction data. These findings indicate that modest architectural changes to graph attention models can yield consistent benefits when temporal relationships play a nontrivial role in the task.
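The page includes no code, but the two additions the abstract describes can be illustrated concretely. Below is a minimal sketch, assuming PyTorch and PyTorch Geometric, of how temporal attention and temporal dropout might sit on top of a GATv2 backbone; all names (TemporalGATSketch, seg_drop_p, the per-segment mean pooling) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): temporal attention and temporal
# dropout layered on a GATv2 backbone, assuming PyTorch Geometric.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GATv2Conv


class TemporalGATSketch(nn.Module):
    """Hypothetical EEG-tGAT-style model: GATv2 per time segment,
    learned attention over segments, and dropout of whole segments."""

    def __init__(self, in_dim, hid_dim, n_classes, heads=4, seg_drop_p=0.2):
        super().__init__()
        self.gat = GATv2Conv(in_dim, hid_dim, heads=heads, concat=False)
        self.time_scorer = nn.Linear(hid_dim, 1)  # temporal attention scores
        self.seg_drop_p = seg_drop_p              # temporal dropout rate
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x_seq, edge_index):
        # x_seq: (T, N, in_dim) -- T time segments over the same N-node graph
        T = x_seq.size(0)
        seg_embs = []
        for t in range(T):
            h = F.elu(self.gat(x_seq[t], edge_index))  # (N, hid_dim)
            seg_embs.append(h.mean(dim=0))             # pool nodes -> (hid_dim,)
        H = torch.stack(seg_embs)                      # (T, hid_dim)

        # Temporal dropout: zero out whole segments during training to
        # regularize against temporally correlated observations.
        if self.training:
            keep = (torch.rand(T, device=H.device) > self.seg_drop_p).float()
            H = H * keep.unsqueeze(-1) / (1.0 - self.seg_drop_p)

        # Temporal attention: weight segments by learned importance, so
        # discriminative time segments dominate the sequence embedding.
        alpha = torch.softmax(self.time_scorer(H).squeeze(-1), dim=0)  # (T,)
        z = (alpha.unsqueeze(-1) * H).sum(dim=0)                       # (hid_dim,)
        return self.classifier(z)
```

The sketch mirrors the abstract's two levers: the softmax over time_scorer outputs learns which segments matter (temporal attention), matching the assumption that discriminative information is unevenly distributed across time, while zeroing whole segments during training (temporal dropout) discourages reliance on any single stretch of temporally correlated input.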
arXiv page: https://arxiv.org/abs/2604.10149 · PDF: https://arxiv.org/pdf/2604.10149
Similar Articles
Target-Oriented Pretraining Data Selection via Neuron-Activated Graph
This paper introduces Neuron-Activated Graph (NAG) Ranking, a training-free framework that selects pretraining data aligned with a target task by ranking candidates on the similarity of their neuron-activation patterns to the sparse, high-impact neuron sets activated by the target. The approach yields a 4.9% average improvement over random sampling, suggesting that sparse neuron patterns capture the functional capabilities needed for target learning.
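As a rough illustration of the ranking idea only (not that paper's implementation), candidates could be scored by cosine similarity between activation vectors; the helper below assumes such vectors have already been extracted, and every name in it is hypothetical.

```python
# Illustrative sketch only (not the NAG paper's code): rank candidate
# pretraining texts by cosine similarity of neuron-activation vectors.
import torch
import torch.nn.functional as F

def rank_by_activation_similarity(target_acts, candidate_acts):
    """target_acts: (D,) mean activation vector for the target task.
    candidate_acts: (C, D) activation vectors for C candidate texts.
    Returns candidate indices sorted from most to least similar."""
    sims = F.cosine_similarity(candidate_acts, target_acts.unsqueeze(0), dim=1)
    return torch.argsort(sims, descending=True)
```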
Robustness of Graph Self-Supervised Learning to Real-World Noise: A Case Study on Text-Driven Biomedical Graphs
This paper introduces NATD-GSSL, a framework evaluating the robustness of Graph Self-Supervised Learning on noisy, text-driven biomedical graphs. It demonstrates that certain GNN architectures and pretext tasks maintain performance despite real-world noise, offering practical guidance for unsupervised learning in imperfect datasets.
GCCM: Enhancing Generative Graph Prediction via Contrastive Consistency Model
This paper introduces GCCM, a graph contrastive consistency model that improves generative graph prediction by mitigating shortcut solutions in consistency training through negative pairs and feature perturbation.
Generative modeling with sparse transformers
OpenAI introduces the Sparse Transformer, a deep neural network that improves the attention mechanism from O(N²) to O(N√N) complexity, enabling modeling of sequences 30x longer than previously possible across text, images, and audio. The model uses sparse attention patterns and checkpoint-based memory optimization to train networks up to 128 layers deep, achieving state-of-the-art performance across multiple domains.
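As a hedged sketch of the general idea (not OpenAI's implementation), a strided sparse attention mask that gives each query roughly O(√N) keys can be built as follows; the function name and block layout are illustrative assumptions.

```python
# Illustrative sketch (not OpenAI's code): a causal strided sparse
# attention mask in the spirit of the Sparse Transformer, giving each
# query about O(sqrt(N)) keys instead of O(N).
import math
import torch

def strided_sparse_mask(n):
    stride = max(1, int(math.sqrt(n)))
    q = torch.arange(n).unsqueeze(1)         # query positions (n, 1)
    k = torch.arange(n).unsqueeze(0)         # key positions   (1, n)
    causal = k <= q                          # no attention to the future
    local = (q - k) < stride                 # recent window of ~sqrt(n) keys
    strided = (k % stride) == (stride - 1)   # one summary key per block
    return causal & (local | strided)        # True where attention is allowed
```

Each query attends to about √N recent positions plus one summary position per √N-sized block, which is where the overall O(N√N) cost comes from.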