Multi-module GRPO: Composing Policy Gradients and Prompt Optimization for Language Model Programs
Summary
The paper introduces mmGRPO, a multi-module extension of Group Relative Policy Optimization (GRPO) that improves accuracy in modular AI systems by grouping LM calls by module across rollouts and composing policy-gradient updates with automatic prompt optimization. It reports an average 11% accuracy improvement over the post-trained LM across classification, many-hop search, and privacy-preserving delegation tasks, and provides an open-source implementation in DSPy.
Source: https://huggingface.co/papers/2508.04660
Abstract
mmGRPO, a multi-module extension of GRPO, enhances accuracy in modular AI systems by optimizing LM calls and prompts across various tasks.
Group Relative Policy Optimization (GRPO) has proven to be an effective tool for post-training language models (LMs). However, AI systems are increasingly expressed as modular programs that mix together multiple LM calls with distinct prompt templates and other tools, and it is not clear how best to leverage GRPO to improve these systems. We begin to address this challenge by defining mmGRPO, a simple multi-module generalization of GRPO that groups LM calls by module across rollouts and handles variable-length and interrupted trajectories. We find that mmGRPO, composed with automatic prompt optimization, improves accuracy by 11% on average across classification, many-hop search, and privacy-preserving delegation tasks against the post-trained LM, and by 5% against prompt optimization on its own. We open-source mmGRPO in DSPy as the dspy.GRPO optimizer.
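To make the grouping concrete, here is a minimal sketch of how module-wise grouping of group-relative advantages might look. It reflects a plain reading of the abstract, not the released implementation; the `Call`, `Rollout`, and `group_calls_by_module` names are hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Call:
    module: str        # which module / prompt template produced this LM call
    prompt: str
    completion: str

@dataclass
class Rollout:
    calls: list[Call]  # variable length: trajectories may make different numbers of calls
    reward: float      # scalar reward for the full program trajectory

def group_calls_by_module(rollouts: list[Rollout]) -> dict[str, list[tuple[Call, float]]]:
    """Compute a group-relative advantage per rollout, attach it to every
    LM call in that rollout, and collect the calls into per-module groups."""
    rewards = [r.reward for r in rollouts]
    mean = sum(rewards) / len(rewards)
    std = (sum((x - mean) ** 2 for x in rewards) / len(rewards)) ** 0.5
    std = std if std > 0 else 1.0  # avoid division by zero when all rewards match
    groups: dict[str, list[tuple[Call, float]]] = defaultdict(list)
    for rollout in rollouts:
        advantage = (rollout.reward - mean) / std
        # Interrupted or short trajectories simply contribute fewer calls.
        for call in rollout.calls:
            groups[call.module].append((call, advantage))
    return groups
```

The released optimizer itself ships in DSPy as dspy.GRPO; see the project's GitHub repository for the actual API.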
Get this paper in your agent:
hf papers read 2508.04660
Don’t have the latest CLI? curl -LsSf https://hf.co/cli/install.sh | bash
Similar Articles
UDM-GRPO: Stable and Efficient Group Relative Policy Optimization for Uniform Discrete Diffusion Models
UDM-GRPO introduces a stable RL training framework for uniform discrete diffusion models, boosting GenEval accuracy from 69% to 96% and OCR benchmark accuracy from 8% to 57%.
GroupDPO: Memory efficient Group-wise Direct Preference Optimization
GroupDPO introduces a memory-efficient algorithm for group-wise direct preference optimization that leverages multiple candidate responses per prompt while reducing peak memory usage through decoupled backpropagation. The method demonstrates consistent improvements over standard DPO across offline and online alignment settings.
A^2TGPO: Agentic Turn-Group Policy Optimization with Adaptive Turn-level Clipping
This paper introduces A^2TGPO, a reinforcement learning method for agentic LLMs that uses adaptive turn-level clipping and information gain normalization to improve process credit assignment in multi-turn interactions.
Proximal Policy Optimization
OpenAI introduces Proximal Policy Optimization (PPO), a reinforcement learning algorithm that matches or outperforms state-of-the-art methods while being simpler to implement and tune. PPO uses a novel clipped objective function to constrain policy updates and has since become OpenAI's default RL algorithm.
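For reference, PPO's clipped surrogate objective from the original paper is

```latex
L^{\text{CLIP}}(\theta)
  = \hat{\mathbb{E}}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\;
      \operatorname{clip}\!\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)},
```

where $\hat{A}_t$ is an advantage estimate and $\epsilon$ is the clipping range (0.2 in the paper).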
Balanced Aggregation: Understanding and Fixing Aggregation Bias in GRPO
This paper identifies and addresses aggregation bias in GRPO-style reinforcement learning for LLMs, proposing Balanced Aggregation (BA) which improves training stability and final performance by computing token-level means separately for positive and negative subsets.
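Based only on the one-sentence summary above, the core operation might look like the following sketch in PyTorch; `balanced_aggregate` is a hypothetical name, and the actual method's details may differ.

```python
import torch

def balanced_aggregate(token_losses: torch.Tensor, advantages: torch.Tensor) -> torch.Tensor:
    """Illustrative only: average token losses separately over the positive-
    and negative-advantage subsets, then combine the two means, so neither
    subset's size dominates the aggregate (one reading of the BA summary)."""
    pos = advantages > 0
    neg = advantages < 0
    parts = []
    if pos.any():
        parts.append(token_losses[pos].mean())
    if neg.any():
        parts.append(token_losses[neg].mean())
    return torch.stack(parts).mean() if parts else token_losses.mean()
```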