A Temporally Augmented Graph Attention Network for Affordance Classification

Hugging Face Daily Papers

Summary

EEG-tGAT is a temporally augmented Graph Attention Network that improves affordance classification from interaction sequences by incorporating temporal attention and dropout mechanisms. The model enhances GATv2 for sequential data where temporal dimensions are semantically non-uniform.

Source: https://huggingface.co/papers/2604.10149

Abstract

Graph attention networks (GATs) provide one of the best frameworks for learning node representations in relational data; however, existing variants such as the original GAT primarily operate on static graphs and rely on implicit temporal aggregation when applied to sequential data. In this paper, we introduce the Electroencephalography-temporal Graph Attention Network (EEG-tGAT), a temporally augmented formulation of GATv2 designed for affordance classification from interaction sequences. The proposed model incorporates temporal attention to modulate the contribution of different time segments and temporal dropout to regularize learning across temporally correlated observations. The design reflects the assumption that temporal dimensions in affordance data are not semantically uniform and that discriminative information may be unevenly distributed across time. Experimental results on affordance datasets demonstrate that EEG-tGAT achieves improved classification performance compared to GATv2. The observed improvements suggest that explicitly encoding temporal importance and enforcing temporal robustness introduce inductive biases that are better aligned with the structure of affordance-driven interaction data. These findings indicate that modest architectural modifications to graph attention models can yield consistent benefits when temporal relationships play a significant role in the task.
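The abstract describes the two additions only at a high level. As a concrete illustration, here is a minimal sketch of how temporal attention and temporal dropout could sit on top of a GATv2 backbone, assuming PyTorch Geometric, per-segment node features over a shared electrode graph, and a mean-pool readout; the class name, the segment scorer, and this layout are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GATv2Conv

class TemporalGATv2(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes, heads=4, temporal_drop_p=0.2):
        super().__init__()
        # Spatial attention within each time segment (shared across segments).
        self.gat = GATv2Conv(in_dim, hid_dim, heads=heads, concat=False)
        # Scores each segment's pooled embedding; a softmax over segments
        # yields the temporal attention weights described in the abstract.
        self.seg_scorer = nn.Linear(hid_dim, 1)
        self.temporal_drop_p = temporal_drop_p
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x_seq, edge_index):
        # x_seq: (T, N, in_dim) -- T time segments over the same N-node graph.
        seg_embs = [F.elu(self.gat(x_t, edge_index)).mean(dim=0) for x_t in x_seq]
        H = torch.stack(seg_embs)                         # (T, hid_dim)
        # Temporal dropout: during training, zero out whole segments so the
        # model cannot rely on any single, temporally correlated slice.
        if self.training and self.temporal_drop_p > 0:
            keep = (torch.rand(H.size(0), 1, device=H.device)
                    > self.temporal_drop_p).float()
            H = H * keep / (1.0 - self.temporal_drop_p)
        # Temporal attention: learn how much each segment contributes.
        alpha = torch.softmax(self.seg_scorer(H), dim=0)  # (T, 1)
        return self.classifier((alpha * H).sum(dim=0))    # class logits

Applying dropout to whole time segments, rather than to individual features, matches the stated goal of robustness across temporally correlated observations: the model cannot lean on any single slice of the sequence.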

View arXiv page (https://arxiv.org/abs/2604.10149) View PDF (https://arxiv.org/pdf/2604.10149)

Get this paper in your agent:

hf papers read 2604.10149

Don’t have the latest CLI? curl -LsSf https://hf.co/cli/install.sh | bash

Similar Articles

Target-Oriented Pretraining Data Selection via Neuron-Activated Graph

arXiv cs.CL

This paper introduces Neuron-Activated Graph (NAG) Ranking, a training-free framework for selecting pretraining data aligned with target tasks by identifying and ranking candidate data based on similarity in neuron activation patterns. The approach achieves a 4.9% average improvement over random sampling and demonstrates that sparse neuron patterns capture the functional capabilities needed for target learning.
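The summary names the mechanism but not its shape; a rough sketch of what ranking candidate data by neuron-activation similarity could look like follows, assuming precomputed post-activation values from one MLP layer. The binary firing signature and cosine-similarity ranking here are illustrative assumptions, not necessarily the paper's actual graph-based procedure.

import torch
import torch.nn.functional as F

def activation_pattern(acts):
    # acts: (n_examples, n_neurons) post-activation values from one layer.
    # The fraction of examples on which each neuron fires gives a sparse
    # signature of which neurons a data distribution engages.
    return (acts > 0).float().mean(dim=0)

def rank_by_target_alignment(target_acts, candidate_acts):
    # Rank candidate data pools by how closely their activation signatures
    # match the target task's signature (highest cosine similarity first).
    target = activation_pattern(target_acts)
    sims = torch.stack([F.cosine_similarity(activation_pattern(c), target, dim=0)
                        for c in candidate_acts])
    return torch.argsort(sims, descending=True)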

Generative modeling with sparse transformers

OpenAI Blog

OpenAI introduces the Sparse Transformer, a deep neural network that improves the attention mechanism from O(N²) to O(N√N) complexity, enabling modeling of sequences 30x longer than previously possible across text, images, and audio. The model uses sparse attention patterns and checkpoint-based memory optimization to train networks up to 128 layers deep, achieving state-of-the-art performance across multiple domains.
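The blurb cites the complexity drop without showing where it comes from. The sketch below builds a factorized attention mask in the spirit of the paper's fixed pattern (a causal local window plus periodic summary positions), so each query touches on the order of sqrt(N) keys instead of N; the stride choice and helper name are illustrative assumptions, not OpenAI's implementation.

import math
import torch

def sparse_attention_mask(n):
    # Each query attends to the last `stride` positions plus one periodic
    # "summary" position per block of `stride`, so on the order of sqrt(n)
    # keys per query -- the source of the O(N*sqrt(N)) total cost.
    stride = max(1, int(math.sqrt(n)))
    q = torch.arange(n).unsqueeze(1)        # query positions, shape (n, 1)
    k = torch.arange(n).unsqueeze(0)        # key positions,   shape (1, n)
    causal = k <= q
    local = (q - k) < stride
    summary = (k % stride) == (stride - 1)  # last position of each block
    return causal & (local | summary)       # boolean mask, shape (n, n)

mask = sparse_attention_mask(64)
print(mask.sum(dim=1).float().mean())       # ~O(sqrt(64)) keys per query, not 64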