TD3B: Transition-Directed Discrete Diffusion for Allosteric Binder Generation
Summary
TD3B is a sequence-based generative framework for designing allosteric binders with specific agonist or antagonist behaviors using transition-directed discrete diffusion. The paper introduces a method to control directional transitions in protein states, addressing limitations of static structure-based design.
Source: https://huggingface.co/papers/2605.09810
Abstract
A sequence-based generative framework called TD3B is introduced for designing allosteric binders with specified agonist or antagonist behavior by controlling directional transitions in protein states.
Protein function is often controlled by ligands that bias the direction of state transitions, such as agonists and antagonists, rather than stabilizing a single conformation. This is especially important for clinically relevant G protein-coupled receptors (GPCRs), where therapeutic efficacy depends on functional directionality. Structure-based design methods optimize binding to static conformations and cannot represent non-reversible, directional effects or systematically distinguish agonist from antagonist behavior. To address this gap, we introduce Transition-Directed Discrete Diffusion for Allosteric Binder Design (TD3B), a sequence-based generative framework that designs binders with specified agonist or antagonist behavior via a directional transition control objective. TD3B combines a target-aware Direction Oracle, a soft binding-affinity gate, and amortized fine-tuning of a pre-trained discrete diffusion model, enabling targeted agonist and antagonist generation decoupled from binding affinity and unattainable by equilibrium-based or inference-only guidance baselines. The code and checkpoints are available at https://huggingface.co/ChatterjeeLab/TD3B.
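To make the abstract's objective concrete, here is a minimal sketch of how a "directional transition control" reward might combine a direction-oracle score with a soft binding-affinity gate. This is an illustration only: the function names, the sigmoid gate form, the `threshold` and `sharpness` parameters, and the sign convention for the oracle score are all assumptions, not TD3B's actual implementation.

```python
import math

def soft_affinity_gate(affinity, threshold=0.5, sharpness=10.0):
    """Hypothetical soft gate: smoothly approaches 1 above the affinity
    threshold and 0 below it, so directionality is only rewarded for
    sequences that also bind."""
    return 1.0 / (1.0 + math.exp(-sharpness * (affinity - threshold)))

def transition_reward(direction_score, affinity, target="agonist"):
    """Hypothetical fine-tuning reward combining a direction-oracle score
    (assumed in [-1, 1], positive = activating transition) with the soft
    affinity gate. Flipping the sign targets antagonist behavior."""
    signed = direction_score if target == "agonist" else -direction_score
    return soft_affinity_gate(affinity) * signed

# A strong binder with an activating oracle score is rewarded as an
# agonist; the same sequence scores negatively under the antagonist target.
print(transition_reward(0.8, affinity=0.9, target="agonist"))
print(transition_reward(0.8, affinity=0.9, target="antagonist"))
```

The gate decouples the two signals, as the abstract describes: a weak binder gets near-zero reward regardless of its directional score, while among binders the reward is driven by the oracle's predicted transition direction.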
Models citing this paper: 1
ChatterjeeLab/TD3B (updated about 4 hours ago)
Similar Articles
Conditional generation of antibody sequences with classifier-guided germline-absorbing discrete diffusion
This paper introduces a discrete diffusion model with a novel 'germline absorbing' modification to improve conditional antibody sequence generation. It addresses germline bias in protein language models and demonstrates superior performance in optimizing antibody binding affinity and developability compared to existing methods like EvoProtGrad.
From Holo Pockets to Electron Density: GPT-style Drug Design with Density
This paper introduces EDMolGPT, an autoregressive framework that generates 3D molecular conformations from low-resolution electron density point clouds, improving structure-based drug design by leveraging physically meaningful density signals.
Self-Distilled Trajectory-Aware Boltzmann Modeling: Bridging the Training-Inference Discrepancy in Diffusion Language Models
This paper introduces TABOM, a self-distilled trajectory-based post-training framework for Diffusion Language Models that aligns training with inference trajectories using Boltzmann modeling to mitigate the training-inference discrepancy and reduce catastrophic forgetting.
Steering Without Breaking: Mechanistically Informed Interventions for Discrete Diffusion Language Models
This paper introduces a novel adaptive scheduler for steering discrete diffusion language models using sparse autoencoders, demonstrating that targeting interventions based on when specific attributes commit improves control quality and strength over uniform methods.
TMPO: Trajectory Matching Policy Optimization for Diverse and Efficient Diffusion Alignment
This paper introduces Trajectory Matching Policy Optimization (TMPO), a method for aligning diffusion models that addresses reward hacking and visual mode collapse by matching trajectory-level reward distributions rather than maximizing scalar rewards.