responsible-ai

Tag

Cards List
#responsible-ai

Expanding on how Voice Engine works and our safety research

OpenAI Blog · 2024-06-07

OpenAI details the development history and safety approach of Voice Engine, from internal testing in 2022 through limited deployments such as ChatGPT Voice Mode and the TTS API. The post emphasizes a careful rollout with professional voice actors and ongoing collaboration with policymakers to address the risks of synthetic voices.

#responsible-ai

OpenAI’s commitment to child safety: adopting safety by design principles

OpenAI Blog · 2024-04-23

OpenAI and major tech companies including Amazon, Google, Meta, and Microsoft have committed to implementing Safety by Design principles for child protection in generative AI development, deployment, and maintenance. The initiative aims to mitigate risks of child sexual abuse material generation and spread through comprehensive measures across model development, release, and ongoing platform safety.

#responsible-ai

Navigating the challenges and opportunities of synthetic voices

OpenAI Blog · 2024-03-29

OpenAI discusses the challenges and opportunities of its Voice Engine technology, emphasizing safety measures, usage policies, and the need for societal resilience against synthetic voice risks. The company is previewing but not widely releasing the technology, while advocating for voice authentication reforms and public education on AI capabilities.

#responsible-ai

Practices for Governing Agentic AI Systems

OpenAI Blog · 2023-12-14

OpenAI publishes a white paper on governing agentic AI systems, proposing definitions, lifecycle responsibilities, and baseline safety practices for autonomous AI agents. The paper addresses the risks and indirect impacts of widespread agentic AI adoption, and OpenAI announces an accompanying research grant program.

#responsible-ai

Frontier Model Forum

OpenAI Blog · 2023-07-26

OpenAI, Google, Microsoft, and Anthropic launch the Frontier Model Forum to coordinate on AI safety standards, research, and information sharing among industry, government, and civil society. The initiative focuses on identifying best practices, advancing AI safety research, and establishing secure mechanisms for sharing safety-related information.

#responsible-ai

Questions for the Record

OpenAI Blog · 2023-06-22

Sam Altman responds to Senate questions on AI regulation, advocating for balanced legislation, voluntary safety commitments, and registration/licensing requirements for highly capable foundation models. OpenAI details its safety evaluation approaches and System Card methodology for assessing dangerous capabilities in models like GPT-4.

#responsible-ai

Our approach to AI safety

OpenAI Blog · 2023-04-05

OpenAI outlines its comprehensive approach to AI safety, emphasizing rigorous testing, iterative deployment, real-world monitoring, and regulatory engagement to ensure powerful AI systems are built and used safely.

#responsible-ai

Planning for AGI and beyond

OpenAI Blog · 2023-02-24

OpenAI outlines its strategy for preparing for AGI, emphasizing gradual deployment with real-world feedback loops, increasing caution as systems approach AGI capabilities, and development of better alignment techniques to ensure AI systems remain steerable and safe.

#responsible-ai

Best practices for deploying language models

OpenAI Blog · 2022-06-02

Cohere, OpenAI, and AI21 Labs have jointly published preliminary best practices for developing and deploying large language models, covering usage guidelines, safety measures, bias mitigation, documentation, diverse teams, and ethical labor standards.

#responsible-ai

Lessons learned on language model safety and misuse

OpenAI Blog · 2022-03-03

OpenAI shares lessons learned on language model safety and misuse, discussing challenges in measuring risks, the limitations of existing benchmarks, and its development of new evaluation metrics for toxicity and policy violations. The post also highlights concerns about labor market impacts and the need for continued research on measuring the social effects of AI deployment at scale.

#responsible-ai

GPT-2: 6-month follow-up

OpenAI Blog · 2019-08-20

OpenAI discusses its six-month follow-up to the GPT-2 release, outlining plans to release the 1558M-parameter model within a few months and emphasizing staged release and partnership-based sharing as keys to responsible AI publication.

#responsible-ai

Why responsible AI development needs cooperation on safety

OpenAI Blog · 2019-07-10

OpenAI publishes a policy research paper identifying four strategies to improve industry cooperation on AI safety norms: communicating risks/benefits, technical collaboration, increased transparency, and incentivizing standards. The analysis addresses how competitive pressures could lead to under-investment in safety and proposes mechanisms to align incentives toward safe AI development.

#responsible-ai

AI on Campus

YouTube AI Channels · 5d ago

Four top university students discuss the current state of AI on campus, highlighting usage challenges, the 'gray area' of regulations, and how AI empowers non-technical students to build projects. The discussion emphasizes that responsible AI use depends on student intent, distinguishing between using AI as a shortcut and using it as a tool for deep learning.

#responsible-ai

@TheFP: Anthropic says Mythos is so powerful that the company is slowing its release. We asked Jared Kaplan why.

X AI KOLs Following · 2026-04-20

Anthropic announced Claude Mythos, a new AI model with elite-level cybersecurity capabilities, including the ability to identify and exploit software vulnerabilities. The company is limiting its release to 40 corporations through Project Glasswing so that countermeasures can be prepared before wider deployment.
