OpenAI has launched a Bio Bug Bounty program for GPT-5.5, inviting security researchers to identify universal jailbreaks that defeat its biological safety challenges. The program offers rewards of up to $25,000 for successfully bypassing the model's safeguards on specific bio-risk questions.
OpenAI demonstrates GPT-5's capability to accelerate biological research by autonomously optimizing a molecular cloning protocol in collaboration with Red Queen Bio, achieving a 79-fold improvement in cloning efficiency through novel enzymatic mechanisms. The work showcases AI's potential to support experimental iteration and empirical validation in wet lab settings while highlighting biosecurity considerations.
OpenAI researchers study the worst-case frontier risks of releasing open-weight LLMs through malicious fine-tuning (MFT) in the biology and cybersecurity domains, finding that maliciously fine-tuned open-weight models still underperform frontier closed-weight models and do not substantially advance harmful capabilities.
OpenAI has launched a bio bug bounty program inviting vetted researchers to attempt a universal jailbreak of ChatGPT Agent's bio/chem safety challenge, offering up to $25,000 for a jailbreak that defeats all ten levels. Applications opened July 17, 2025, with testing beginning July 29, 2025.
OpenAI publishes a comprehensive approach to managing dual-use risks from advanced AI models in biology, outlining strategies for enabling beneficial scientific discovery while preventing misuse for bioweapons development through expert collaboration, model training, detection systems, and security controls.
OpenAI and Los Alamos National Laboratory announced a research partnership to evaluate how frontier AI models like GPT-4o can safely assist scientists in laboratory settings, with a focus on bioscience capabilities and biosecurity risk assessment.
OpenAI conducted a study with 100 participants to evaluate whether GPT-4 meaningfully increases access to information for dangerous biological threat creation compared to an internet-only baseline, as part of its Preparedness Framework for AI safety. The research introduces an early-warning evaluation methodology to detect AI-enabled biorisk uplift, serving as a potential tripwire for flagging models that require further safety testing.