The European Parliamentary Research Service (EPRS) has labeled VPNs 'a loophole that needs closing' in the context of online age-verification laws, citing concerns about children bypassing regional content restrictions. The proposal has drawn criticism from privacy advocates and VPN providers, highlighting tensions between child safety regulation and digital privacy rights.
A comprehensive analysis of national AI strategies across ten Asian economies, highlighting how Vietnam's standalone AI law contrasts with Japan's promotion-focused approach and China's open-source industrial policy, while South Korea leads in enforcement capacity.
The article discusses President Trump's shift from an 'anything goes' AI policy to considering strict regulation, including pre-deployment government reviews for high-risk frontier AI models, citing cybersecurity and national security concerns.
Australia's cybersecurity regulator is calling for urgent action to counter threats from a hacking group or malware known as Mythos.
A University of Cambridge study published in the Journal of Behavioral Addictions reveals that gambling ads on social media reach young men at more than twice the rate of women, even when not directly targeted.
Canadian federal and provincial privacy watchdogs have determined that OpenAI violated privacy laws by scraping vast amounts of personal data to train ChatGPT without proper consent.
Changpeng Zhao (CZ) states he has been avoiding the U.S. recently but plans to return to engage with the community and address misconceptions following changes in crypto policy.
FSFE report shows Apple has denied all 56 interoperability requests under the DMA, contradicting its own documentation and locking out third-party developers from key iOS/iPadOS features.
Elon Musk claims a majority of Commercial Driver's Licenses (CDLs) were issued illegally in New York.
Kyle Kingsbury discusses emerging roles of human accountability in ML systems, including content moderators, legal representatives, and compliance officers who may bear responsibility for AI system failures.
OpenAI sends a letter to California Governor Gavin Newsom advocating for harmonized national AI regulation standards instead of a patchwork of state-by-state rules, arguing this approach would better support innovation and competitiveness while maintaining safety.
OpenAI and the UK Government announced a strategic partnership to accelerate AI adoption across public and private sectors, with a Memorandum of Understanding signed by Sam Altman and UK Technology Secretary Peter Kyle. The partnership includes collaboration on AI deployment, infrastructure development, and technical information sharing, with OpenAI committing to expand its UK presence.
OpenAI presents an EU Economic Blueprint proposing four pillars to drive AI-fueled growth in Europe: establishing foundational resources (chips, data, energy, talent), streamlining regulatory frameworks, maximizing AI adoption across sectors, and ensuring responsible development aligned with European values. The blueprint includes concrete initiatives like a 300% computing capacity increase by 2030, a €1 billion AI Accelerator Fund, and training 100 million Europeans in AI skills.
OpenAI proposes a comprehensive U.S. AI policy framework to the federal government, focusing on innovation freedom, strategic export controls, copyright reform, infrastructure investment, and government AI adoption to maintain American competitiveness against the PRC.
OpenAI submitted a comment to the NTIA outlining their historical approach to model weight distribution, from GPT-2's staged release to GPT-3's API-first strategy, while discussing the trade-offs between open-source model releases and controlled deployment through commercial products.
OpenAI submitted a response to NIST's request for information under the Executive Order on AI, outlining its approaches to evaluating AI capabilities, red teaming, and synthetic media provenance, including findings from GPT-4 biosecurity risk evaluations.
OpenAI submits formal comments to the NTIA on AI accountability policy, outlining their approach to responsible development of foundation models and supporting both horizontal and vertical accountability frameworks across the AI ecosystem.
OpenAI outlines a framework for superintelligence governance emphasizing three key pillars: coordination among leading AI development efforts, an international authority (akin to the IAEA) to oversee systems above certain capability thresholds, and technical progress on AI safety with democratic public oversight of the most powerful systems.