MedConclusion: A Benchmark for Biomedical Conclusion Generation from Structured Abstracts
Summary
MedConclusion introduces a large-scale benchmark of 5.7 million PubMed structured abstracts for evaluating LLMs on biomedical conclusion generation from structured scientific evidence. The study finds that conclusion writing is behaviorally distinct from summarization and that current automatic metrics cluster strong models closely together.
Source: https://huggingface.co/papers/2604.06505
Abstract
A large-scale dataset of 5.7 million PubMed structured abstracts is introduced for biomedical conclusion generation, enabling evaluation of large language models’ ability to reason from structured scientific evidence.
Large language models (LLMs) are widely explored for reasoning-intensive research tasks, yet resources for testing whether they can infer scientific conclusions from structured biomedical evidence remain limited. We introduce MedConclusion, a large-scale dataset of 5.7M PubMed structured abstracts for biomedical conclusion generation. Each instance pairs the non-conclusion sections of an abstract with the original author-written conclusion, providing naturally occurring supervision for evidence-to-conclusion reasoning. MedConclusion also includes journal-level metadata such as biomedical category and SJR (SCImago Journal Rank), enabling subgroup analysis across biomedical domains. As an initial study, we evaluate diverse LLMs under conclusion and summary prompting settings and score outputs with both reference-based metrics and LLM-as-a-judge. We find that conclusion writing is behaviorally distinct from summary writing, that strong models remain closely clustered under current automatic metrics, and that judge identity can substantially shift absolute scores. MedConclusion provides a reusable data resource for studying scientific evidence-to-conclusion reasoning. Our code and data are available at: https://github.com/Harvard-AI-and-Robotics-Lab/MedConclusion.
Datasets citing this paper (2)
- harvardairobotics/MedConclusion-Compact — Viewer • Updated 1 day ago • 140k • 1
- harvardairobotics/MedConclusion — Viewer • Updated 1 day ago • 5.83M • 2
Similar Articles
MEDSYN: Benchmarking Multi-Evidence Synthesis in Complex Clinical Cases for Multimodal Large Language Models
MEDSYN is a multilingual multimodal benchmark for evaluating MLLMs on complex clinical cases with up to 7 distinct visual evidence types per case. The study reveals that while frontier models match human experts on differential diagnosis generation, all MLLMs show significant gaps in final diagnosis selection due to poor synthesis of heterogeneous clinical evidence.
Experiments or Outcomes? Probing Scientific Feasibility in Large Language Models
UMBC researchers show LLMs judge scientific claim feasibility better when given outcome data than experiment descriptions, and that incomplete experimental context can hurt accuracy.
Injecting Structured Biomedical Knowledge into Language Models: Continual Pretraining vs. GraphRAG
This paper compares two strategies for injecting structured biomedical knowledge from the UMLS Metathesaurus into language models: continual pretraining (embedding knowledge into model parameters) and GraphRAG (querying a knowledge graph at inference time). Results show improvements on biomedical QA benchmarks, with GraphRAG on LLaMA 3-8B gaining more than 3 accuracy points on PubMedQA and more than 5 on BioASQ without any retraining.
Generating Query-Focused Summarization Datasets from Query-Free Summarization Datasets
This paper proposes an evidence-based model to automatically generate query keywords from query-free summarization datasets, enabling the creation of query-focused summarization datasets. Experimental results show that summaries generated using evidence-based queries achieve competitive ROUGE scores compared to original queries.
Surrogate modeling for interpreting black-box LLMs in medical predictions
Researchers propose a surrogate modeling framework to quantify and interpret latent medical knowledge encoded in black-box LLMs, revealing both valid associations and persistent racial biases.