Third-party risk management agent

YouTube AI Channels

Summary

OpenAI unveils Trove, a no-code agent that automates vendor due diligence by screening for sanctions, financial, and reputational risk and producing analyst-ready reports in minutes.

Watch a guided walkthrough of an agent that screens vendors for sanctions, financial, and reputational risk, then turns the findings into a clear report.

Cached at: 04/22/26, 09:26 PM

TL;DR: OpenAI demonstrates Trove, a no-code agent that screens vendors for sanctions, financial, and reputational risk and produces a human-ready due-diligence report in minutes.

## Project overview: Trove, the third-party risk agent

Hojun introduces Trove, an internal-style agent that automates the vendor-due-diligence workflow used by OpenAI's Finance team. The goal is to cut the manual, repetitive parts of screening while keeping consistency and control.

## Building the agent from a single prompt

The build starts with one natural-language prompt that describes the entire workflow, the tools the agent will call, and the skills it must apply. Finance's existing supplier-risk playbook is uploaded as a "skill" that encodes best-practice rules, metadata, and guardrails. Within seconds ChatGPT drafts an implementation plan and begins live assembly.

## Side-by-side collaboration

The left pane stays in free-form chat, letting the builder refine logic in plain language. The right pane renders the agent's evolving tools, skills, and applications in real time. No engineering code is written; the prompt-and-response loop is the only interface.

## One-click packaging

When the builder is satisfied, ChatGPT compiles the conversation into a polished, fine-tuned instruction set. The bundle is immediately deployable without extra technical resources.

## Preview and test

The same UI offers a preview mode. The builder launches a test run and watches the agent:

- ingest a vendor name
- call external data sources for sanctions, PEP, adverse-media, and financial-health signals
- apply the Finance team's risk-scoring skill
- coordinate tasks across systems
- collect evidence and citations

A trace panel shows every tool call, input, and decision point for auditability.

## Output: analyst-ready report

Within minutes the agent returns a structured report that maps directly to the human analyst's checklist. The document is formatted for final review, eliminating the copy-paste and manual search steps that formerly consumed hours.

## Impact

Finance analysts move from exhaustive manual lookups to a guided review of a pre-built, evidence-backed dossier, accelerating third-party onboarding while preserving risk rigor.

Source: https://www.youtube.com/watch?v=jwkKbedaeYM
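The screening flow described in the video — take a vendor name, query external risk sources, apply a scoring skill, and assemble an evidence-backed report — can be sketched in plain Python. This is a minimal illustration, not OpenAI's implementation: every function name, severity scale, and stub data source below is a hypothetical stand-in.

```python
from dataclasses import dataclass

# Hypothetical sketch of a vendor-screening pipeline like the one Trove
# demonstrates. All names, thresholds, and data sources are illustrative.

@dataclass
class Finding:
    source: str    # e.g. "sanctions", "pep", "adverse_media", "financials"
    severity: int  # 0 (clear) .. 3 (critical), an assumed scale
    citation: str  # evidence link kept for the audit trail

def screen_vendor(name: str, data_sources: dict) -> list[Finding]:
    """Query each external source and collect findings with citations."""
    findings: list[Finding] = []
    for lookup in data_sources.values():
        findings.extend(lookup(name))
    return findings

def score_risk(findings: list[Finding]) -> str:
    """Toy stand-in for the Finance team's risk-scoring skill:
    the worst individual finding determines the overall rating."""
    worst = max((f.severity for f in findings), default=0)
    return {0: "low", 1: "low", 2: "medium", 3: "high"}[worst]

def build_report(name: str, findings: list[Finding]) -> dict:
    """Assemble an analyst-ready dossier keyed to the review checklist."""
    return {
        "vendor": name,
        "risk_rating": score_risk(findings),
        "evidence": [(f.source, f.citation) for f in findings],
    }

# Stub sources standing in for real sanctions/PEP/media/financial APIs.
stub_sources = {
    "sanctions": lambda name: [],
    "adverse_media": lambda name: [
        Finding("adverse_media", 2, "https://example.com/article")
    ],
}

report = build_report("Acme Corp", screen_vendor("Acme Corp", stub_sources))
```

In the real agent the lookups would be tool calls traced in the audit panel; here they are plain functions so the scoring and report-assembly steps stay visible.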

Similar Articles

SafetyKit scales risk agents with OpenAI’s most capable models

OpenAI Blog

SafetyKit launches AI agents powered by OpenAI's GPT-5, GPT-4.1, and specialized techniques to detect fraud and prohibited activity across text, images, and financial transactions with 95%+ accuracy. The solution enables marketplaces and fintech platforms to automate risk detection, policy enforcement, and content moderation at scale.

Turning contracts into searchable data at OpenAI

OpenAI Blog

OpenAI shares how it built an internal contract data agent that automates the extraction and structuring of contract data from various document formats while keeping finance experts in control through a human-in-the-loop review process. The system has reduced contract review time by half and enabled the team to process thousands of contracts monthly without proportional headcount expansion.

Software review agent

YouTube AI Channels

OpenAI built an internal Slack agent called Slate that automatically reviews software requests, checks policy, compares tools, and files Jira tickets for IT when license expansion is needed.

Outbound coordinated vulnerability disclosure policy

OpenAI Blog

OpenAI has published its outbound coordinated vulnerability disclosure policy, outlining how it responsibly reports security vulnerabilities discovered in third-party software to vendors and open-source maintainers, including those found through AI-powered security analysis. The policy covers detection methods, peer review processes, and disclosure procedures for its security research work, branded 'Aardvark'.