How OpenAI is approaching 2024 worldwide elections

OpenAI Blog News

Summary

OpenAI announced its 2024 election safeguards, including directing users to authoritative voting information sources, preventing deepfake generation of political figures, and disrupting covert influence operations. The company reported that roughly 1 million ChatGPT responses directed people to voting resources and that it rejected over 250,000 requests to generate images of politicians.

We’re working to prevent abuse, provide transparency on AI-generated content, and improve access to accurate voting information.

Source: https://openai.com/index/how-openai-is-approaching-2024-worldwide-elections/

**Update on November 8, 2024:**

In the lead-up to the US elections, we implemented safeguards to direct people to reliable sources of information, prevent deepfakes, and counter efforts by malicious actors. Here are some early insights into those measures:

**Elevating authoritative sources of information**

- Throughout 2024, we've worked to elevate reliable sources of election information within ChatGPT. Through our collaboration with the National Association of Secretaries of State (NASS), we directed people asking ChatGPT specific questions about voting in the U.S., like where or how to vote, to [CanIVote.org](http://canivote.org/). In the month leading up to the election, roughly 1 million ChatGPT responses directed people to CanIVote.org. Similarly, starting on Election Day in the U.S., people who asked ChatGPT for election results received responses encouraging them to check news sources like the Associated Press and Reuters. Around 2 million ChatGPT responses included this message on Election Day and the day following.
- In addition to our efforts to direct people to reliable sources of information, we also worked to ensure ChatGPT did not express political preferences or recommend candidates even when asked explicitly.

**Preventing deepfakes**

- We've applied safety measures to ChatGPT to refuse requests to generate images of real people, including politicians. These guardrails are especially important in an elections context and are a key part of our broader efforts to prevent our tools being used for deceptive or harmful purposes. In the month leading up to Election Day, we estimate that ChatGPT rejected over 250,000 requests to generate DALL·E images of President-elect Trump, Vice President Harris, Vice President-elect Vance, President Biden, and Governor Walz.

**Disrupting threat actors**

- A central part of our global elections work in 2024 has been identifying and disrupting attempts to use our tools to generate content used in covert influence operations. In [May](https://openai.com/index/disrupting-deceptive-uses-of-AI-by-covert-influence-operations/), we began publicly sharing information on our disruptions, and published additional reports in [August](https://openai.com/index/disrupting-a-covert-iranian-influence-operation/) and [October](https://cdn.openai.com/threat-intelligence-reports/influence-and-cyber-operations-an-update_October-2024.pdf).
- Our teams continued to monitor our services closely in the lead-up to Election Day and have not seen evidence of U.S. election-related influence operations attracting viral engagement or building sustained audiences through the use of our models.

---

**Update on October 31, 2024:**

- As we approach Election Day in the U.S., our teams are actively testing the safeguards we've put in place over the past year and monitoring for any issues or attempts to evade them. We will adjust our protective measures as needed, guided by ongoing insights into how people engage with our tools.
- Starting on November 5th, people who ask ChatGPT about election results will see a message encouraging them to check news sources like the [Associated Press](https://apnews.com/projects/election-results-2024/) and [Reuters](https://www.reuters.com/world/us/elections/), or their state or local election board, for the most complete and up-to-date information.
- This effort builds on our collaboration with the National Association of Secretaries of State to direct people looking for information about how and where to vote to [CanIVote.org](http://canivote.org/), the authoritative website on U.S. voting information. ChatGPT will continue to direct people asking these questions to CanIVote.org through Election Day.

---

**Update on May 14, 2024:**

- As part of our ongoing work to promote transparency around AI content during this important election year, we recently began providing researchers with early access to a new tool that can help identify images created by OpenAI's DALL·E 3. We also joined the Steering Committee of C2PA, the Coalition for Content Provenance and Authenticity. C2PA is a widely used standard for digital content certification, developed and adopted by a wide range of actors including software companies, camera manufacturers, and online platforms.
- Building on our efforts to direct people to authoritative sources of information about voting in the U.S., we've introduced a new experience ahead of the 2024 election for the European Parliament. ChatGPT now directs users to the European Parliament's official source of voting information, [elections.europa.eu](https://elections.europa.eu/), when asked certain questions about the election process, such as where to vote. This is similar to our collaboration with the National Association of Secretaries of State (NASS) for the 2024 US Presidential election.
- In addition to the steps we're taking at OpenAI, we believe there is an important role for governments. Today we are endorsing the "[Protect Elections from Deceptive AI Act](https://www.govtrack.us/congress/bills/118/s2770/text/is)," a bipartisan bill proposed by Senators Klobuchar, Hawley, Coons, Collins, Ricketts, and Bennet in the United States Senate. The bill would ban the distribution of deceptive AI-generated audio, images, or video relating to federal candidates in political advertising, while including important exceptions to protect First Amendment rights. We do not want our technology, or any AI technology, to be used to deceive voters, and we believe this legislation represents an important step toward addressing this challenge in the context of political advertising.

---

Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine it. Our tools empower people to improve their daily lives and solve complex problems, from using AI to [enhance state services](https://www.governor.pa.gov/newsroom/shapiro-administration-and-openai-launch-first-in-the-nation-generative-ai-pilot-for-commonwealth-employees/) to [simplifying medical forms for patients](https://www.bostonglobe.com/2023/08/23/metro/can-chatgpt-help-with-medical-forms/). We want to make sure that our AI systems are built, deployed, and used [safely](https://openai.com/index/our-approach-to-ai-safety/). Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.

As we prepare for elections in 2024 across the world's largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency. We have a cross-functional effort dedicated to election work, bringing together expertise from our safety systems, threat intelligence, legal, engineering, and policy teams to quickly investigate and address potential abuse. The following are key initiatives our teams are investing in to prepare for elections this year:
