Big AI's Regulatory Capture: Mapping Industry Interference and Government Complicity
Summary
This academic paper develops a taxonomy of 27 mechanisms illustrating how major AI corporations capture regulatory processes and influence government policy, backed by an analysis of 100 news articles. The authors warn of the systemic risks of this industry-government collusion and propose strategies to resist corporate dominance in AI governance.
# Big AI’s Regulatory Capture: Mapping Industry Interference and Government Complicity

Source: [https://arxiv.org/html/2605.06806](https://arxiv.org/html/2605.06806)

Abeba Birhane (AI Accountability Lab (AIAL), School of Computer Science and Statistics (SCSS), Trinity College Dublin, Ireland), Riccardo Angius (AIAL, SCSS, Trinity College Dublin, Ireland), William Agnew (Human Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, USA), Harshvardhan J. Pandit (AIAL, SCSS, Trinity College Dublin, Ireland), Bhaskar Mitra (Independent Researcher, Tiohtià:ke / Montréal, Canada), Roel Dobbe (Faculty of Technology, Policy and Management, Delft University of Technology, Delft, Netherlands), and Zeerak Talat (School of Informatics, University of Edinburgh, Edinburgh, Scotland). (2026)

###### Abstract.

Over the past decade, the AI industry has come to exert unprecedented economic, political, and societal power and influence.
The well-functioning of the regulatory and oversight structures and processes that govern the industry thus has paramount ramifications for everything from public trust in systems marketed as AI, to the credibility of scientific knowledge, educational and healthcare services and products, information ecosystems, the environment, the rule of law, and the integrity of democratic processes. It is therefore critical that we comprehend the extent and depth of the pervasive and multifaceted capture of AI regulation by corporate actors in order to contend with and challenge it. In this paper, we first develop a taxonomy of mechanisms enabling capture, to provide a comprehensive understanding of the problem. Grounded in design science research (DSR) methodologies and an extensive scoping review of existing literature and media reports, our taxonomy of capture consists of 27 mechanisms across five categories. We then develop an annotation template incorporating our taxonomy, and manually annotate and analyse 100 news articles. The purpose of this analysis is twofold: to validate our taxonomy and to provide a novel quantification of capture mechanisms and dominant narratives. Our analysis identifies 249 instances of capture mechanisms, often co-occurring with narratives that rationalise such capture. We find that the most recurring categories of mechanisms are *Discourse & Epistemic Influence*, concerning narrative framing, and *Elusion of Law*, related to violations and contentious interpretations of antitrust, privacy, copyright, and labour laws. We further find that *Regulation stifles innovation*, *Red tape*, and *National interest* are the narratives most frequently invoked to rationalise capture. We emphasize the extent and breadth of regulatory capture by coalescing forces — Big AI and governments — as something policy makers and the public ought to treat as an emergency.
Finally, we put forward key lessons learned from other industries, along with transferable tactics for uncovering, resisting, and challenging Big AI capture, as well as for envisioning counter-narratives.

The 2026 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’26), June 25–28, 2026, Montreal, QC, Canada. doi: 10.1145/3805689.3806740. isbn: 979-8-4007-2596-8/2026/06.

## 1. Introduction

The past decade has seen rapid growth in the development and integration of AI technologies, altering virtually all societal infrastructures – such as information ecosystems (e.g., search and archives), education, finance, healthcare, and law enforcement – and affecting millions of people worldwide (De Liban, [2024](https://arxiv.org/html/2605.06806#bib.bib103); Institute, [2025](https://arxiv.org/html/2605.06806#bib.bib104)).
This has raised questions around intellectual property (Pasquale and Sun, [2024](https://arxiv.org/html/2605.06806#bib.bib105); Quintais, [2025](https://arxiv.org/html/2605.06806#bib.bib106)), democratic processes such as elections (Olanipekun, [2025](https://arxiv.org/html/2605.06806#bib.bib107); Nie, [2024](https://arxiv.org/html/2605.06806#bib.bib108)), consumer protection (EU, [2018](https://arxiv.org/html/2605.06806#bib.bib109); Shead, [2021](https://arxiv.org/html/2605.06806#bib.bib110); Pandit et al., [2026](https://arxiv.org/html/2605.06806#bib.bib145)), algorithmic bias and discrimination (Solaiman et al., [2023](https://arxiv.org/html/2605.06806#bib.bib111); Dobbe, [2022](https://arxiv.org/html/2605.06806#bib.bib112)), privacy (Kalluri et al., [2025](https://arxiv.org/html/2605.06806#bib.bib113); Feldstein, [2019](https://arxiv.org/html/2605.06806#bib.bib114)), and misinformation (Singh, [2025](https://arxiv.org/html/2605.06806#bib.bib115); Hilal et al., [2024](https://arxiv.org/html/2605.06806#bib.bib116)). These concerns have in turn led to numerous global, multilateral, and national governance and regulatory efforts to preserve fundamental rights and to reduce the harms and risks of AI systems. For example, the EU AI Act states its purpose is to “promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights […] including democracy, the rule of law and environmental protection, against the harmful effects of AI systems” (EU, [2023](https://arxiv.org/html/2605.06806#bib.bib45)).
These aims reflect the expressed wishes of the public, which tend to be at odds with the controversial practices of Big AI.¹

¹We use the term ‘Big AI’ to refer to the handful of companies that develop and mass-deploy large-scale AI technologies – such as large pre-trained models – built on massive datasets collected through vast, centralised infrastructures, and which, with increasing integration into societal infrastructure, continue to exert outsized epistemic, economic, political, and societal influence. The term ‘Big AI’ also encapsulates the structural consolidation of AI technologies by Big Tech as their core value proposition, central to their infrastructure, resources, and strategic investments (Van Der Vlist et al., [2024](https://arxiv.org/html/2605.06806#bib.bib146)). While new entities such as OpenAI, Anthropic, DeepSeek, and xAI are included in our definition due to their significant geo-political influence, ‘Big AI’ also marks the shift of existing Big Tech companies – Alphabet, Meta, Amazon, Microsoft, Apple, NVIDIA – towards becoming “AI-first” companies, further expanding their unprecedented power and influence across all economic sectors and aspects of public life.

Recent public surveys have found broad support for regulation of AI. For example, an analysis of 2000 responses on AI deployment found that concerns related to violations of privacy and civil rights were among the most pressing, with the majority (68%) of respondents highlighting these risks (Robles and Mallinson, [2025](https://arxiv.org/html/2605.06806#bib.bib118)). Similarly, Pew Research found that 60% of the US population would be uncomfortable with the use of AI in their health care services (Tyson et al., [2023](https://arxiv.org/html/2605.06806#bib.bib53)), while 72% of respondents to a UK survey expressed a need for stronger regulation in order to feel comfortable with AI technologies (Institute and Institute, [2025](https://arxiv.org/html/2605.06806#bib.bib52)).
Despite the broad public support for protecting public interests through regulation of the AI industry, there are growing concerns about how the tech industry’s outsized influence on policy-making may obstruct meaningful safeguards and public priorities. Although the European Union has relied on civil society participation to shape digital regulation (European Center for Not-for-Profit Law (ECNL) and European AI & Society Fund, [2024](https://arxiv.org/html/2605.06806#bib.bib28)), numerous investigations show an outsized influence from the tech industry over the development of regulatory standards (Corporate Europe Observatory, [2025](https://arxiv.org/html/2605.06806#bib.bib25)). For example, reports have shown how the European Commission has uncritically adopted the industry’s call to “simplify” the AI Act (alongside other digital regulation) even before it has been fully implemented (European Commission, [2025](https://arxiv.org/html/2605.06806#bib.bib19)). These concerns are also felt in the US, where a coalition of tech, consumer protection, labour, economic and environmental justice, and civil society organizations launched a “People’s AI Action Plan” as a counter-weight to the government’s industry-backed AI executive orders and agenda: “The White House AI Action Plan is written by Big Tech interests invested in advancing AI that’s used on us, not by us. Today, we are reclaiming agency over the trajectory AI will take: it’s time for a People’s Action Plan for AI that puts the needs of everyday Americans over corporate profits” (AI Now Institute, [2025](https://arxiv.org/html/2605.06806#bib.bib26)). In contemporary societies, regulatory oversight functions as a central mechanism for ensuring public health, safety, and environmental protection (Shleifer, [2005](https://arxiv.org/html/2605.06806#bib.bib44)).
Regulation governs the breadth of society: from means of transport to the buildings we live in, the appliances used in our kitchens, the food available on the market, and educational infrastructure; it also defines mechanisms for health and safety inspections and protections. Protective and social regulation, in particular, aim to “make our lives safer by eliminating or reducing risks or exposure to risks” (Levi-Faur, [2011](https://arxiv.org/html/2605.06806#bib.bib43)). Regulatory and enforcement mechanisms have often emerged, along with other practices such as scientific codes of conduct addressing conflicts of interest, in the aftermath of tragedies that resulted in harm, injury, and death. For example, the Thalidomide tragedy of the 1960s resulted in birth defects and the deaths of thousands due to insufficient standards for testing drugs prior to their release to the public (Moro and Invernizzi, [2017](https://arxiv.org/html/2605.06806#bib.bib34)). Similarly, the Pfizer trial of the experimental antibiotic trovafloxacin (Trovan) for cerebrospinal meningitis in Nigeria resulted in at least 11 deaths and severe side effects in several children, including severe liver damage and kidney failure (Loewenberg, [2008](https://arxiv.org/html/2605.06806#bib.bib38); Carr, [2003](https://arxiv.org/html/2605.06806#bib.bib39)). Both Thalidomide and Trovan were subsequently withdrawn from the market, and the ensuing scandals reformed drug safety governance, with the Thalidomide tragedy now considered a watershed moment for pharmaceutical regulation (Moro and Invernizzi, [2017](https://arxiv.org/html/2605.06806#bib.bib34)). Such incidents highlight how profit motives, conflicts of interest, and corporate influence may overshadow the public interest in scientific research — as well as in the development, marketing, and governance of consumer products — and result in tragedy.
Understanding corporate influence on AI regulation requires examining research and reporting on the actions and positions of the tech industry, particularly Big AI, as well as those of policy-makers and other relevant actors. A growing body of evidence shows how the AI industry has been attempting to undermine and resist regulation, oversight, and enforcement (Observatory, [2023](https://arxiv.org/html/2605.06806#bib.bib120)), including through large-scale lobbying (Observatory, [2024](https://arxiv.org/html/2605.06806#bib.bib122); Amin, [2025](https://arxiv.org/html/2605.06806#bib.bib123)); retaliation against whistleblowers (Post, [2025](https://arxiv.org/html/2605.06806#bib.bib125)), civil society groups, researchers, and law-makers (Reuters, [2015](https://arxiv.org/html/2605.06806#bib.bib126); BBC, [2025](https://arxiv.org/html/2605.06806#bib.bib127)); the revolving door — e.g., the former French Secretary of State for Digital Transition, Cédric O, becoming a shareholder and advisor for Mistral (Observatory, [2024](https://arxiv.org/html/2605.06806#bib.bib122)); and political donations (One, [2025](https://arxiv.org/html/2605.06806#bib.bib128)). For instance, Amazon, Google, Meta, and Apple each contributed $1 million to Donald Trump’s political campaign, while Elon Musk contributed $250 million to pro-Trump groups prior to the 2024 US elections. Governments and regulatory authorities can also play a key role in undermining and minimising the effects of existing or emerging rules.
For example, the EU Commission President Von der Leyen has called for deregulation (Commission, [2025](https://arxiv.org/html/2605.06806#bib.bib129)); the General Purpose AI Code of Practice under the AI Act — drafted with industry participation — was diluted at each stage until protections for human rights were made optional (Cabrera et al., [2025](https://arxiv.org/html/2605.06806#bib.bib140)); the UK AI Bill has been delayed; and German authorities have discussed withdrawing laws to maintain “attractive[ness] to tech companies” (Magazine, [2025](https://arxiv.org/html/2605.06806#bib.bib139)) and to ensure “innovation-friendly” implementation of AI regulation (FragDenStaat, [2025](https://arxiv.org/html/2605.06806#bib.bib130)). A similar trend is evident in the US: the current administration has halted enforcement and regulatory oversight into Meta’s alleged improper use of user financial data obtained from third parties for advertising purposes (Citizen, [2025b](https://arxiv.org/html/2605.06806#bib.bib131)); halted, dropped, or withdrawn enforcement of hundreds of cases of alleged corporate wrongdoing (Citizen, [2025a](https://arxiv.org/html/2605.06806#bib.bib133)); and signed an executive order banning state-level AI regulations (Robins-Early and Kerr, [2025](https://arxiv.org/html/2605.06806#bib.bib141)). Beyond the converging actions of regulators and the tech industry, coordinated efforts that actively campaign to portray the AI industry positively while framing regulatory oversight as undesirable put meaningful governance and regulatory oversight at risk. For example, the new super PAC formed by Andreessen Horowitz, Greg Brockman, and Mark Zuckerberg has pledged tens of millions of dollars to promote “industry-friendly policies” and cast the AI industry in a favourable light (Bellan, [2025](https://arxiv.org/html/2605.06806#bib.bib138); Tan, [2025](https://arxiv.org/html/2605.06806#bib.bib136)).
Indeed, several public campaigns are already underway. A recent open letter signed by hundreds of CEOs presents a simplistic dichotomy of “innovation or regulation” (Letter, [2024](https://arxiv.org/html/2605.06806#bib.bib4)), alongside media outreach, targeted advertising, and funding of public initiatives, such as Meta’s multimillion-dollar ad campaign depicting data centres as a boon to agricultural towns in Iowa and New Mexico (Miller, [2025](https://arxiv.org/html/2605.06806#bib.bib137)). The coalescence of corporate and state interests threatens the rule of law, independent regulatory oversight, and meaningful accountability — all of which are crucial for responsible and publicly supported technological innovation. It is therefore important to shed light on the various forms of corporate capture, the mechanisms involved, and the narratives used to legitimise it, so that these practices and the responsible actors can be better understood and challenged. We interpret this as a reflexive knowledge problem: the growing role of capture in AI regulation and governance — and its societal consequences — is a relevant but understudied topic in its own right, and a factor that reflexively shapes the scholarship undertaken in FAccT and adjacent communities, e.g., by influencing narratives, research funding, infrastructure, and priorities (Whittaker, [2021](https://arxiv.org/html/2605.06806#bib.bib49); Abdalla and Abdalla, [2021](https://arxiv.org/html/2605.06806#bib.bib54); Young et al., [2022](https://arxiv.org/html/2605.06806#bib.bib96)). In this paper, we study mechanisms that enable capture of AI regulation, oversight, and public discourse.
Based on a comprehensive study of grey and academic literature across different disciplines and societal actors working to uncover corporate capture (see Section [2](https://arxiv.org/html/2605.06806#S2)), we identified that (1) corporate capture is a multi-faceted topic with few general *conceptual frameworks* available, (2) there is little academic research on corporate capture by Big AI specifically, and (3) existing work on corporate capture of AI regulation is mostly done by advocacy groups and civil society organizations, but has not been brought together to provide a holistic picture of the phenomenon. We address this gap by mapping the types of mechanisms that enable capture and the associated narratives, as this is an important step towards comprehensively understanding, identifying, documenting, and challenging capture, as well as charting alternative futures. We approached this knowledge problem by drawing on Design Science Research (DSR), and developed a taxonomy that serves as a *conceptual model* for mapping capture mechanisms and narratives. We draw on desk research, expert exchange, literature review, and iterative design cycles in which we apply and evaluate the taxonomy and refine its comprehensiveness and clarity. Based on our taxonomy, we then develop an annotation schema and manually annotate two datasets: DS1, containing 25 articles, and DS2, containing 75 Reuters news articles. The purpose of this analysis is twofold: the analysis of DS1 validates our taxonomy, while the analysis of DS2 provides a novel quantification of capture mechanisms and narratives, as well as insights into their prevalence and characteristics. We emphasize here that our study is primarily *descriptive and interpretive*, and does not seek to quantify the extent to which capture occurs or the mechanisms and narratives that cause it. We find that the landscape of capture is multi-faceted.
Tech companies employ a wide variety of tactics: from disregarding or misinterpreting existing laws; claiming that regulation stifles innovation and unfairly prevents people from accessing AI products; relying on speculative studies; and lobbying; to private meetings with regulators to ensure that regulation remains favourable and oversight negligent. Addressing the current challenge of capture in the AI landscape requires drawing inspiration from past efforts against regulatory capture. We therefore draw on past efforts from other sectors to counter regulatory capture as a guide for protecting public values that are vital to the present and future of our academic fields, public institutions, regulatory bodies, and shared public life.

## 2. Related work: the multifaceted nature of capture

Despite the AI industry’s rapidly expanding power, influence, and impacts on broad swathes of society, corporate capture in the AI domain remains poorly defined and understudied. Existing academic scholarship has examined forms of capture in other sectors, such as the tobacco, oil, pharmaceutical, and information technology industries; however, a systematic examination and organisation of such work within the AI industry remains limited. This section addresses this gap by surveying disparate strands of relevant work from civil society organisations and academic literature.
In accordance with Shapiro ([2012](https://arxiv.org/html/2605.06806#bib.bib155)), we organise prior work into two functionally different forms of capture: *Epistemic capture* and *Regulatory capture*. *Epistemic capture* can further be distinguished into *Academic capture* — concerning the industry’s influence over scientific knowledge production through, e.g., funding, sponsorship, and institutional partnerships that shape academic research intersecting with AI, including but not limited to computer engineering, ethics, and safety — and *Media capture* — pertaining to efforts to sway media, public discourse, debates, and conceptions of AI through strategies such as narrative framing, deception, disinformation, and doublespeak. By structuring the literature thus, we highlight the fragmented nature of existing work and the converging themes that emerge across it.

### 2.1. Epistemic Capture

#### 2.1.1. Academic Capture

Corporate capture extends to scientific knowledge production (Krimsky, [1985](https://arxiv.org/html/2605.06806#bib.bib74); Hiltner et al., [2024](https://arxiv.org/html/2605.06806#bib.bib67); Muttitt, [2004](https://arxiv.org/html/2605.06806#bib.bib68); Noor, [2024](https://arxiv.org/html/2605.06806#bib.bib69); Abdalla and Abdalla, [2021](https://arxiv.org/html/2605.06806#bib.bib54)), which may intensify industry influence on regulation by legitimizing industry narratives and discrediting narratives unfavourable to corporate interests.
Industry funding, for example, deliberately shapes scientific conclusions to favour industry and undermine public welfare (Orr, [2010](https://arxiv.org/html/2605.06806#bib.bib77); Baba et al., [2005](https://arxiv.org/html/2605.06806#bib.bib75); Cranor, [2008](https://arxiv.org/html/2605.06806#bib.bib76); Krimsky, [2004](https://arxiv.org/html/2605.06806#bib.bib78); Sass et al., [2005](https://arxiv.org/html/2605.06806#bib.bib79); Michaels, [2008](https://arxiv.org/html/2605.06806#bib.bib80); Abdalla and Abdalla, [2021](https://arxiv.org/html/2605.06806#bib.bib54)). Lachapelle et al. ([2024](https://arxiv.org/html/2605.06806#bib.bib81)) identify three contributing factors for academic capture: (i) the increasing financialisation of higher education, (ii) growing industry influence, and (iii) the reticence of university employees to challenge the status quo. Hiltner et al. ([2024](https://arxiv.org/html/2605.06806#bib.bib67)) identify several types of industry ties to higher education in the fossil fuel research community, including: serving on academic boards, sponsorship and endowments, student recruitment, advising on courses, and leasing of university lands. The AI industry has been particularly successful in establishing its dominating influence over AI research (Whittaker, [2021](https://arxiv.org/html/2605.06806#bib.bib49); Ahmed et al., [2023](https://arxiv.org/html/2605.06806#bib.bib42)), paralleling the U.S. military’s dominance over scientific research during the Cold War (Whittaker, [2021](https://arxiv.org/html/2605.06806#bib.bib49)).
The concentration of control over data, compute resources, expertise, and funding has erected barriers to conducting critical AI research without industry involvement (Murgia, [2019](https://arxiv.org/html/2605.06806#bib.bib95)) or without accepting the AI industry’s premises. Whittaker ([2021](https://arxiv.org/html/2605.06806#bib.bib49)) adds, “[t]hese companies control the tooling, development environments, languages, and software that define the AI research process — they make the water in which AI research swims”. The AI industry has created myriad schemes to draw academia closer to the companies, including supporting dual-affiliation arrangements that allow scholars to draw high salaries from Big Tech while publishing their research under academic affiliations, industry-sponsored Ph.D. programs, and joint grant programs which align with and elevate industry perspectives (Whittaker, [2021](https://arxiv.org/html/2605.06806#bib.bib49)). Bak-Coleman et al. ([2025](https://arxiv.org/html/2605.06806#bib.bib143)) identify several mechanisms by which AI companies skew research findings, including selective publishing of internal research, biased study design, funding of academic research, inhibiting independent research, and selective collaborations. Big Tech is also a major sponsor of preeminent venues for research on algorithmic harms — *e.g.*, the ACM Conferences on Fairness, Accountability, and Transparency (ACM FAccT); on AI, Ethics, and Society (AAAI/ACM AIES); and on Human Factors in Computing Systems (ACM CHI) — with many conference organizers and participants being affiliated with and drawing salaries from these corporations (Young et al., [2022](https://arxiv.org/html/2605.06806#bib.bib96)). Such sponsorship serves to bolster the image of corporations as socially responsible; influence events, decisions, and research; and identify academics who can be leveraged (Abdalla and Abdalla, [2021](https://arxiv.org/html/2605.06806#bib.bib54)).
These schemes to influence academic knowledge production also resemble strategies previously operationalized by the likes of Big Tobacco, Big Pharma, and Big Oil (Abdalla and Abdalla, [2021](https://arxiv.org/html/2605.06806#bib.bib54); Young et al., [2022](https://arxiv.org/html/2605.06806#bib.bib96)), and raise serious concerns about research integrity in the field of AI.

#### 2.1.2. Media Capture

Capture of media and public discourse (Schiffrin, [2018](https://arxiv.org/html/2605.06806#bib.bib83); Enikolopov and Petrova, [2015](https://arxiv.org/html/2605.06806#bib.bib84); Hurt et al., [2004](https://arxiv.org/html/2605.06806#bib.bib82); Taft, [2024](https://arxiv.org/html/2605.06806#bib.bib70)) is another avenue for corporations to promote favourable narratives, sway public discourse, and suppress critique. Stiglitz ([2017](https://arxiv.org/html/2605.06806#bib.bib85)) identifies four forms of media capture: (i) capture by ownership, (ii) capture through financial incentives, (iii) capture by censorship, and lastly, (iv) cognitive capture. The coverage of AI in mainstream media is consistently aligned with Big Tech. Companies’ announcements and claims about their products are often reproduced with little, if any, scrutiny, while the credibility and expertise of those outside the industry are simultaneously downplayed. In an analysis of 1,000 media articles, “[c]orporate actors [were] by far the most frequently quoted by journalists covering AI, with no civil society voices featuring amongst the top-25 most-quoted people or organisations” (Tanner and Bryden, [2023](https://arxiv.org/html/2605.06806#bib.bib99)). Similarly, the New York Times quoted representatives of commercial tech companies 67% of the time, while only 6% of quotes were from representatives of civil society (Barakat, [2024](https://arxiv.org/html/2605.06806#bib.bib98)).
Moreover, industry insiders were described as ‘experts’, while civil society representatives were presented as ‘sceptics’. Big Tech also exerts its influence over media through ownership, funding of journalism, shaping state media regulation, targeting media (policy) research institutions and universities, and by providing the platforms through which the media comes to reach its audiences. In doing so, Big Tech has created an infrastructure in which media and journalists help reproduce, disseminate, and normalize industry narratives, or face exclusion (Robins-Early, [2025](https://arxiv.org/html/2605.06806#bib.bib102)).

### 2.2. Regulatory Capture

Capture of regulation occurs when powerful industries influence, manipulate, or undermine the governance agencies and authorities that oversee them, control decisions made about the industry (Dal Bó, [2006](https://arxiv.org/html/2605.06806#bib.bib40); Levi-Faur, [2011](https://arxiv.org/html/2605.06806#bib.bib43)), or steer regulations towards the industry’s benefit rather than (or at the cost of) the public interest (Carpenter and Moss, [2013](https://arxiv.org/html/2605.06806#bib.bib41); Li, [2023](https://arxiv.org/html/2605.06806#bib.bib47)). Shapiro ([2012](https://arxiv.org/html/2605.06806#bib.bib155)) defines it as “occurring when [regulating] agencies consistently adopt regulatory policies favo[u]red by regulated entities”, which encompasses influencing, manipulating, and otherwise interfering with governance processes, rule-making, and independent oversight. When regulators and governments become too sympathetic to the problems of the corporations and industries that they regulate, regulatory bodies risk becoming too lenient and, by extension, captured (Levi-Faur, [2011](https://arxiv.org/html/2605.06806#bib.bib43)). Regulatory capture can use legal (e.g.,
lobbying) or illegal (e.g., bribes) tactics, and may serve mutual government-industry interests, e.g., through relying on invited feedback from industry actors at the expense of regulators’ own research (Agrell and Gautier, [2012](https://arxiv.org/html/2605.06806#bib.bib46)). Other industries that have attempted regulatory capture have used a broad set of tactics. For example, the tobacco industry uses information manipulation, legal preemption, threats, financial incentives, and policy substitution (Savell et al., [2014](https://arxiv.org/html/2605.06806#bib.bib63)), while the pharmaceutical industry relies on lobbying and political influencing, such as by leveraging relationships with drug approval agencies, revolving doors, and challenging regulation through marketing campaigns, funding think tanks, and patient advocacy groups (Vertinsky, [2021](https://arxiv.org/html/2605.06806#bib.bib59); Morgan and Duffy, [2019](https://arxiv.org/html/2605.06806#bib.bib88)). Regulatory capture can also take the form of developing policy based on industry perspectives, or of allowing industry influence to set policy agendas, manipulate information, or directly influence political (Estache et al., [2011](https://arxiv.org/html/2605.06806#bib.bib73)) and other forms of decision-making processes (Li, [2023](https://arxiv.org/html/2605.06806#bib.bib47)). AI companies have been found to evade regulatory enforcement and pressure regulators to change policy, for example by withholding their digital services in relevant jurisdictions (Lancieri et al., [2024](https://arxiv.org/html/2605.06806#bib.bib51)), while at the same time establishing partnerships with government agencies to redesign digital public infrastructure, which fosters dependencies and captures government agencies (Baykurt, [2025](https://arxiv.org/html/2605.06806#bib.bib142)).
In particular, the European Union’s technology bills, including those for AI, are the biggest target of corporate lobbying (Cerulus et al., [2025](https://arxiv.org/html/2605.06806#bib.bib93)), with tech companies recruiting large internal policy teams (Hall et al., [2025](https://arxiv.org/html/2605.06806#bib.bib94)). Documents obtained by Corporate Europe Observatory (Schyns, [2023](https://arxiv.org/html/2605.06806#bib.bib90)) demonstrate how “[v]ia years of direct pressure, covert groups, tech-funded experts […] tech companies have reduced safety obligations, sidelined human rights and anti-discrimination concerns, and secured regulatory carveouts for some of their key AI products”. Through freedom of information requests and interviews with policy experts, respectively, Gorwa et al. ([2024](https://arxiv.org/html/2605.06806#bib.bib91)) and Wei et al. ([2024](https://arxiv.org/html/2605.06806#bib.bib48)) identify access lobbying, coalition building, stakeholder mobilisation, funding, agenda-setting, academic capture, information management, cultural capture through status, and media capture.

## 3. Methodology

### 3.1. Design Science Research Approach

We approach the study of capture mechanisms, and of the prevalent narratives surrounding capture of AI regulation, through the methodological lens of Design Science Research (DSR). DSR is a problem-solving paradigm that seeks to enhance in-depth and situated knowledge via the development of *artifacts* (vom Brocke et al., [2020](https://arxiv.org/html/2605.06806#bib.bib24)), originally intended to solve identified problems in contexts in which (a) existing theory is insufficient to explain the phenomena under study and (b) various aspects (people, organizations, technologies, institutions, or other factors) need to be studied empirically and holistically to be comprehensively understood (Hevner et al., [2004](https://arxiv.org/html/2605.06806#bib.bib20)).
The designed artifact typically aims to support the knowledge problem and may take various forms \(Offermann et al\., [2010](https://arxiv.org/html/2605.06806#bib.bib22); Weigand et al\., [2021](https://arxiv.org/html/2605.06806#bib.bib32)\), which need not be clear upfront\. We identified a need for a taxonomy of mechanisms enabling capture, as it quickly became clear that regulatory capture is prominent in global AI discussions, major multilateral events, as well as day\-to\-day information exchange such as news articles\. And while various advocacy groups and authorities have reported valuable insights on various \(sets of\) capture mechanisms \(see Section [2](https://arxiv.org/html/2605.06806#S2)\), a clear taxonomy and conceptual interpretation of corporate capture across all known or reported mechanisms was lacking\. A DSR project is embodied by three closely related cycles of activities: understanding the problem environment \(relevance cycle\), grounding a problem in theory and concepts \(rigour cycle\), and designing and evaluating the artifact to support a knowledge need or solve a particular problem \(design cycle\) \(Hevner, [2007](https://arxiv.org/html/2605.06806#bib.bib33)\)\. These cycles may happen in arbitrary order based on the structure of the research project, occur sequentially or in parallel, and may be iteratively revisited\. Our design science research followed six cycles:

Cycle \(1\) relevance: To gain an in\-depth understanding of the prevalence of corporate capture, we engaged in desk research and expert exchange\.
A group of seven experts gathered on a weekly basis over a period of 10 months and discussed regulatory capture based on news articles that had surfaced in the individuals’ news consumption, and which were selected based on two inclusion criteria: relevance – a report is about or directly related to AI regulation – and credibility – which captures the reputability of the publishing venue and the clarity and quality of the reported evidence\. We excluded reports that were not directly related to AI regulation, e\.g\., meta\-analyses, opinion pieces, etc\., and reports with unsubstantiated claims\. The notes taken during desk research and expert meetings formed the basis for a literature review and the initial taxonomy development\. In this cycle, we also determined the criteria for evaluating the taxonomy: comprehensiveness – do the identified capture mechanisms cover the phenomena reported in the news in an exhaustive \(to the extent possible\) and mutually exclusive manner? – and legibility – are the mechanisms and their categories and definitions clear enough to be interpreted consistently across a group of experts? To address the potential of remaining confirmation bias in the selection of articles in this first cycle, we decided on mitigating measures for evaluation \(random sampling, see Cycle 4\) and application \(a structured, systematized search of news articles, see Cycle 5\)\.

Cycle \(2\) rigour: To ensure a comprehensive understanding of corporate capture, we performed a focused reading of existing definitions from the academic literature and of analyses of current occurrences of capture – primarily through policy reports and commentary analyses from civil society and rights groups\. Insights from this literature review are reported in Section [2](https://arxiv.org/html/2605.06806#S2)\.

Cycle \(3\) design and evaluation: Based on the notes gathered from the expert exchange \(Cycle 1\) and the literature review \(Cycle 2\), we developed an initial taxonomy\.
This taxonomy included a list of categories of corporate capture and capture mechanisms within these categories\. We iterated on these following discussions in expert meetings\. We added three additional categories \(*Government adopting industry framing*, *Conflation of public and private interest*, and *No capture*\) to the initial taxonomy and annotation schema \(see Cycle 4\)\.

Cycle \(4\) design and evaluation: To further evaluate the taxonomy, we translated it to an annotation schema \(see [Section A\.2](https://arxiv.org/html/2605.06806#A1.SS2) for the annotation template\), which we applied to DS1, composed of 25 articles randomly drawn from an initial set of 100 news articles collected by experts\. See Section [3\.3](https://arxiv.org/html/2605.06806#S3.SS3) for annotation and coding procedures\.

Cycle \(5\) relevance – applying the taxonomy: We applied the updated taxonomy and annotation schema to DS2, a dataset constructed from a systematic search \(see Section [3\.2](https://arxiv.org/html/2605.06806#S3.SS2)\)\. Further, we extended the annotation schema to extract narratives used to support corporate capture, as reported in the articles\.

Cycle \(6\) design and evaluation – finalizing the taxonomy: The final taxonomy was informed by another abductive step that was based on analyses by the annotators in Cycle 5 and subsequent evaluation by the experts group, focused on comprehensiveness and legibility of the identified mechanisms, their categories, and definitions\. We validated the presence of mechanisms in the studied datasets DS1 and DS2 through quantitative analysis of mechanisms and narratives, reported in Section [4\.1](https://arxiv.org/html/2605.06806#S4.SS1)\. We summarize the final taxonomy in Section [4](https://arxiv.org/html/2605.06806#S4), with the full taxonomy and descriptions given in [Section A\.1](https://arxiv.org/html/2605.06806#A1.SS1)\.
### 3\.2\. Dataset Curation and Search Criteria

To study regulatory capture as it unfolds across globally significant events, we conducted an in\-depth scoping review of news media articles, as these seek to report current affairs and reflect key debates and decisions taking place in the real world pertaining to the AI industry\. News articles also function as a mechanism for obtaining up\-to\-date information in a fast\-paced and continually shifting landscape\. We curated two datasets: DS1 \(25 articles\), used to validate our taxonomy, and DS2 \(75 articles\), used to quantify and perform in\-depth analysis of capture as reported in Reuters coverage\. DS1 was curated from news articles shared by researchers in a private discussion forum\. DS2 was curated by searching Google News for articles published around three periods which contain four events critical to the development of AI regulation: the first global AI Summit in the U\.K\. and the EU AI Act trilogue negotiations \(October 2023\-February 2024\), the second global AI Summit in South Korea \(April 2024\-June 2024\), and the third global AI Summit in France \(January 2025\-March 2025\)\. We searched using the keywords “artificial intelligence policy \{ usa — eu — uk \}”, obtaining 10,691 results for the USA, 8,106 for the EU, and 5,832 for the U\.K\. queries, respectively\. Our search results for DS2 comprised 24,629 article URL and title pairs, which we scraped using a Crawlee\-based scraper \(Crawlee: https://crawlee\.dev\)\. We de\-duplicated and sub\-selected articles based on rated quality, reliability, and orientation towards fact\-based reporting\. We selected Reuters, rated as the most reliable and highest\-quality source in an analysis of 11,520 news domains \(Lin et al\., [2023](https://arxiv.org/html/2605.06806#bib.bib92)\), as our single source\.
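The de-duplication and single-source filtering step just described can be sketched in a few lines. This is an illustrative reconstruction, not the authors' pipeline: the function name `curate`, the `(url, title)` input format, and the exact de-duplication keys (URL and normalised title) are assumptions.

```python
# Hypothetical sketch of the DS2 curation step: drop duplicate
# (url, title) pairs, then keep only articles from one source domain.
from urllib.parse import urlparse

def curate(results, source_domain="reuters.com"):
    """results: iterable of (url, title) pairs from the scraper."""
    seen_urls, seen_titles, kept = set(), set(), []
    for url, title in results:
        key = title.strip().lower()
        if url in seen_urls or key in seen_titles:
            continue  # duplicate URL or (near-)identical title
        seen_urls.add(url)
        seen_titles.add(key)
        if urlparse(url).netloc.endswith(source_domain):
            kept.append((url, title))  # single-source sub-selection
    return kept
```

Note that this sketch catches only exact-title duplicates; the paper also removes articles with a high degree of content similarity, which would require comparing body text.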
We used the article titles to compile a list of relevant unigrams, and applied this to identify titles containing relevant terms, resulting in 683 unique articles\. \(We kept only articles containing at least one of the following words in their title: \{nvidia, openai, musk, google, meta, ceo, microsoft, apple, deepseek, amazon, summit, ai act, regulat\*, antitrust, policy, plan, rule\}\.\) We then manually applied our inclusion criteria \(relevance and credibility, see Section [3\.1](https://arxiv.org/html/2605.06806#S3.SS1)\), selecting articles based on title, which reduced the article count to 181 unique articles\. From this set, we sampled 75 articles with stratified sampling across the three periods, resulting in 26 samples for the first period \(UK AI Summit & EU AI Act trilogue negotiations\), 16 samples for the second period \(South Korea AI Summit\), and 33 samples for the third period \(France AI Action Summit\)\. We then manually checked the body text of the articles to remove and replace, while maintaining stratification, duplicates – including articles with a high degree of content similarity – and articles that did not cover AI regulations \(despite their relevant titles\)\.

Figure 1\. PRISMA diagram for the construction of dataset DS2\.

### 3\.3\. Annotation schema and code book

We developed the annotation schema iteratively, informed by our taxonomy and expert discussions\. The codebook \(see [Section A\.2](https://arxiv.org/html/2605.06806#A1.SS2)\) contained fields recording article metadata \(such as the publication date, title, data split – i\.e\., DS1 or DS2 – and time period\), mechanism category, and mechanisms and narratives observed\. Each article was annotated independently by two members of the research team\. Annotators were instructed to carefully read each article and annotate: mechanism category, mechanism, narratives used, and excerpts containing the supporting evidence\.
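One way to picture the codebook is as a record per article-annotator pair mirroring the fields listed above. This is an assumed structure for illustration only, not the authors' actual codebook; all field names are hypothetical.

```python
# Hypothetical annotation record mirroring the codebook fields
# described in the text (metadata, mechanism labels, narratives,
# and supporting evidence excerpts).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AnnotationRecord:
    # Article metadata
    title: str
    publication_date: str
    data_split: str                 # "DS1" or "DS2"
    period: Optional[str] = None    # time period (DS2 only)
    # Labels applied by one annotator
    mechanism_categories: set = field(default_factory=set)
    mechanisms: set = field(default_factory=set)
    narratives: list = field(default_factory=list)  # free-text field
    evidence: list = field(default_factory=list)    # supporting excerpts
```

Representing mechanisms as sets makes the later pairwise comparison between annotators (exact match, partial overlap, no overlap) a direct set operation.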
The mechanisms and their category were selected from our taxonomy \(see [Section 4](https://arxiv.org/html/2605.06806#S4)\), while narratives were given as a free\-text field\. The iterative taxonomy development allowed the team to consistently align with each other when identifying, documenting, and categorizing information\. To ensure a shared understanding of key concepts, we developed a common vocabulary of definitions, descriptions, and instructions as part of the code book\. Annotators extracted excerpts of the articles as ‘evidence’ for their labels and to assist in later discussions and resolutions\. After preliminary annotation, both annotators had labelled exactly the same mechanisms for 21 of the 100 articles, 44 articles had at least one shared mechanism, and 35 had no matching annotated mechanisms\. We approach the annotation task as a consensus annotation task \(Dehghan et al\., [2025](https://arxiv.org/html/2605.06806#bib.bib2)\)\. Once independent annotation had been completed, annotators synchronously considered each disagreed\-upon annotation and collectively decided on a final set of labels based on a manual review of the annotation and evidence collected by both annotators\. \(The deliberation processes occurred first synchronously to ensure a shared understanding, then asynchronously, with one annotator raising instances for the other annotator to review and propose changes or accept the final label set\.\) In line with Oortwijn et al\. \([2021](https://arxiv.org/html/2605.06806#bib.bib156)\), this phase of disagreement resolution surfaced simple mistakes due to missed evidence, interpretive disagreements arising from different interpretations of the same evidence, and conceptual misalignment related to misinterpreted taxonomy definitions\.
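The pairwise agreement breakdown above (21 exact matches, 44 partial overlaps, 35 disjoint) reduces to a set comparison per article. A minimal sketch, with hypothetical names and assuming each annotator's labels are stored as a set of mechanism strings per article:

```python
# Classify each article's two label sets as an exact match,
# a partial overlap, or disjoint (no shared mechanism).
from collections import Counter

def agreement_breakdown(labels_a, labels_b):
    """labels_a, labels_b: dicts mapping article id -> set of mechanisms."""
    counts = Counter()
    for article, a in labels_a.items():
        b = labels_b[article]
        if a == b:
            counts["exact"] += 1
        elif a & b:                  # non-empty intersection
            counts["partial"] += 1
        else:
            counts["disjoint"] += 1
    return counts
```

The "partial" bucket is what makes consensus deliberation necessary: the annotators agree that capture is present but not on its full characterisation.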
Interpretive disagreement arose in two cases regarding breach of copyright and privacy laws, where both annotators befittingly attributed *Disregard existing laws*, but only one attributed *Misinterpret laws* to reflect that the company also publicly maintained it had operated within the bounds of law\. The distinction between the two mechanisms was further illustrated by reports in which the latter was present but the former was not\. This was the case for the reported roll\-out of a personal data processing *Consent or Pay* model\. Here, the requirements of law were not disregarded altogether, yet were misconstrued to mimic compliance\. An example of conceptual misalignment pertains to an article reporting a partnership between Mistral and the German defence startup Helsing \(Sifted, [2026](https://arxiv.org/html/2605.06806#bib.bib157)\), reported as an attempt by Mistral to build stronger ties with the German government\. This was noted as *Economic coercion of government* by one annotator but not the other\. The mechanism was subsequently clarified to encompass only explicit threats, and the annotation was dropped\. We excluded from the final labelled dataset mechanisms for which consensus could not be reached because the evidence was deemed insufficient by one of the annotators\.

Table 1\. Taxonomy of capture mechanisms with brief descriptions\. The highlighted concepts denote five broad, high\-level categories, each further comprising a set of detailed mechanism categories and descriptions\.

## 4\. Mechanisms of Capture and Insights from Quantitative Analysis

In this section, we present our taxonomy of capture mechanisms and detailed analyses of DS1 and DS2\.
### 4\.1\. Corporate Capture: a Taxonomy of Mechanisms

Our taxonomy was developed through the iterative design method \(see Section [3\.1](https://arxiv.org/html/2605.06806#S3.SS1)\) and is informed by prior work on capture occurring in sectors such as Big Tobacco, Big Oil, and Big Pharma \(see Section [2](https://arxiv.org/html/2605.06806#S2)\)\. The taxonomy \(see Table [1](https://arxiv.org/html/2605.06806#S3.T1) and Appendix [A\.1](https://arxiv.org/html/2605.06806#A1.SS1) for an overview and an expanded description\) comprises 27 different capture mechanisms across five categories, which describe the variety of tactics and the sophistication of Big AI and the wider tech industry’s corporate capture efforts\. We intend for the taxonomy to function as a ‘living resource’ which will require iterative updating as the landscape and mechanisms of corporate capture change\.

\(a\) Frequency of capture mechanisms in DS1 & DS2\. *Other* denotes when a broad category of mechanisms, but no specific mechanism, could be identified\. \(b\) Article count, average mechanisms per article and mechanism category distribution for the 75 articles in DS2 covering the three periods\.

Figure 2\. Quantitative results for capture mechanisms found in the annotated datasets\.

### 4\.2\. Capture mechanisms: distribution and relationships

In Figure [2\(a\)](https://arxiv.org/html/2605.06806#S4.F2.sf1) we present the distribution of the 249 total instances of capture mechanisms: 79 \(32% of all instances\) from DS1 and 170 \(68%\) from DS2\. We found capture mechanisms in all articles, with the exception of 11 articles from DS2 labelled as *No capture*\. This label was not applicable to any of the articles in DS1, as the original set used to form it was deliberately composed of articles that concerned both regulation and narrative capture\.
The 10 most frequently identified mechanisms \(50% of instances\) all belong to the *Discourse & Epistemic Influence* \(D&EI\), *Elusion of law*, or *Direct influence on policy* categories\. Within this set, *Elusion of law* is the most recurring category outside of narrative\-framing activity, and comprises violations – *Disregard existing laws* \(17%\) – and contentious interpretations – *Misinterpret laws* \(14%\) – of antitrust, privacy, copyright and labour laws, as well as other mechanisms of operation against the spirit or letter of the law\. We further found that *Lobbying* was present in 40% and 3% of DS1 and DS2, respectively\. Similarly, *Revolving door* occurred in 24% and 5% of DS1 and DS2\. These 10 instances of nationally relevant and high\-profile *Revolving door* were distributed between the US \(6 cases\), the UK \(3 cases\), and the EU \(France, 1 case\)\. Of these 10 cases of conflicting interests, we find that 4 involve ongoing *Ownership/Stake in company* by public officials during their appointment to public office and are evenly distributed across the US and UK\. The remaining 125 instances belong to other, less frequent capture mechanisms\. For DS2, we count reported mechanisms across the three periods shown in Figure [2\(b\)](https://arxiv.org/html/2605.06806#S4.F2.sf2)\. Period A spans five months \(late 2023 to early 2024, surrounding the UK AI Summit and EU AI Act trilogues\), whereas Periods B and C comprise three\-month intervals \(around the 2024 Korean AI Summit and the 2025 Paris AI Action Summit, respectively\)\. The article sample size for each period was calibrated against the number of results returned by the search\. We observe that the distribution of labelled categories per article is stable for Periods A and B, with an average of approximately two mechanisms labelled per article\. For Period C, we find an average of approximately three mechanisms labelled per article\. Across DS1 and DS2, we find 55 articles labelled for D&EI\.
For these cases, each instance is co\-reported with an average of 1\.20 non\-D&EI mechanisms for each D&EI mechanism \(1\.63 and 1\.03 in DS1 and DS2, respectively\)\.

### 4\.3\. Recurring narrative framings and co\-occurrence with capture mechanisms

Figure 3\. Frequency of capture narratives across DS1 and DS2\.

Figure [3](https://arxiv.org/html/2605.06806#S4.F3) shows the total occurrences of capture narratives across annotated articles: 49 out of the 100 articles contain narrative\(s\) that attempt to justify capture\. We identify 11 recurring framings across the 49 \(13 in DS1, 52% of its total, and 36 in DS2, 48%\), through a manual thematic clustering of the free\-text annotations of article excerpts which reference the use of narratives\. For any given article, these clusters may appear alone or co\-occur with others\.

Regulation stifles innovation\. We found this to be the most frequently occurring narrative \(16% overall; 24% and 13% in DS1 and DS2, respectively\), decrying regulation as ontologically at odds with progress\.

Red tape\. The second most frequent narrative concerns alleged red tape, with 15% of all articles labelled for this narrative \(28% of DS1 and 11% of DS2\)\. Within this narrative, regulation is portrayed as unnecessary, excessive, or obsolete, expressed through phrasing such as “regulatory burden”, “over\-regulation”, “simplification”, and “cutting red tape”\. We further observe that this framing tends to precede more explicit calls for “deregulation” from top\-level regulatory authorities who have historically adopted such narratives and continue to do so within the sequence of arguments in their more recent speeches \(Commission, [2025](https://arxiv.org/html/2605.06806#bib.bib129)\)\. This narrative co\-occurs with mechanisms that imply direct contact with regulators – 6 articles labelled with *Lobbying* and 6 with *Private meetings*\.

National interest\. This narrative aggregates calls against assumed impending threats to specific countries or geopolitical blocs\.
Examples include “falling behind” in technology development and more explicit warnings concerning international economic leadership and national security, such as “the AI race”\.

Competitiveness\. Competitiveness was the fourth most frequently occurring framing \(12% overall; 24% and 8% in DS1 and DS2, respectively\)\. This narrative category consolidates framings around the econometric parameters of productivity and competitiveness as values of higher priority than regulatory oversight\. The *Government adopting industry framing* mechanism appears most frequently together with the *Competitiveness* narrative \(eight articles\), and to a lesser extent with the *Regulation stifles innovation* and *Red tape* narratives \(five articles each\)\.

Inconsistent rules refers to the characterisation of existing regulation as unclear or fragmented\. This line of argument occurs in 7% of our data \(12% of DS1 and 5% of DS2\) and alleges both that there are difficulties in interpreting existing laws and that AI deployment must be globally uniform\. Such narratives seek to portray regulation as time\-intensive and unfeasible, and at odds with an increasingly globalised technical infrastructure\. In one specific instance, localised regulations were self\-contradictorily described as an obstacle to industry efforts to “accurately understand important regional languages, cultures or trending topics on social media” \(Newsroom, [2024](https://arxiv.org/html/2605.06806#bib.bib144)\)\.

First innovation, then regulation\. This family of low\-frequency narratives argues that while regulation is needed, technological development outpaces regulatory innovation, and regulators should wait for technological infrastructure to mature into its full potential prior to regulation\.
Lawmakers are further cautioned against regulating technologies, as *Regulation limits freedoms and rights* – another low\-frequency narrative occurring in our data, which contrasts regulation with “potentially life\-saving innovations”, and specific provisions for interoperability with “user privacy and data security”\. For instance, mandates to moderate and remove illegal content online are construed as the legal “institutionalisation of censorship”\. Relatedly, the *Lawmakers misunderstand \(technology\)* narrative suggests that regulators lack the required expertise and understanding of technology to draft appropriate regulation, alleging that companies “are better placed to uncover problems” and that regulatory demands are “unrealistic” or “unnecessary”\. Other, less prevalent supporting narratives include: characterising *AI as a collective need and essential for flourishing* that should not be hindered by law; the neo\-liberal aim to *Reduce government inefficiencies*, which has historically conflated all bureaucracy with unnecessary and inefficient bloat; and *Self\-regulation*, which acknowledges the need for rules but privileges non\-binding, industry\-led pledges and codes of conduct\.

## 5\. Discussion

### 5\.1\. Summary of findings and implications

The results in this paper provide insight into the variety and structure of mechanisms in the analysed news articles\. While our results do not speak to potential correlations or causal effects between the observed mechanisms and regulatory outcomes, they can be used to inform hypotheses about the relationship between the 27 recurring capture mechanisms we have identified and regulatory outcomes\.
The extent of corporate capture further motivates the urgent need to characterize and address the growing centralization of power and influence in the hands of a few tech companies and their shareholders \(Institute, [2025](https://arxiv.org/html/2605.06806#bib.bib104)\), and its consequences for human rights \(AFP, [2025](https://arxiv.org/html/2605.06806#bib.bib14)\)\. Our work on capture mechanisms and narratives serves as an additional dimension in the growing chorus of investigations into the political economy of Big Tech \(see also Section [2](https://arxiv.org/html/2605.06806#S2)\)\. Prior work has typically adopted a dichotomous theoretical lens – distinguishing information capture from influence on policymaking \(Shapiro, [2012](https://arxiv.org/html/2605.06806#bib.bib155)\) – whereas the most recurring mechanisms in our evidence also include the *Elusion of law*\. These recurring violations and contentious interpretations of antitrust, privacy, copyright and labour laws call into question the effectiveness of enforcement\. Such pressures, along with weakening the mandates of regulatory agencies, risk the normalisation of a de facto law in which Big AI operates outside the bounds of regulatory scrutiny\. Furthermore, the stochastic nature of the underlying technology – where the technology cannot be reliably tested – coupled with the economic power of Big AI corporations enables a normalisation of “algorithmic states of exception” at scale\. As defined by McQuillan \([2022](https://arxiv.org/html/2605.06806#bib.bib159)\), this indicates the application of algorithms as the de facto authority in contexts where their behaviour has not been or cannot be tested, creating conditions akin to martial law\. The conjunction of law\-flouting practice and technological affordances thus raises urgent concerns regarding the contemporary integrity of lawmaking institutions over their respective jurisdictions\.
Our annotation of the narratives employed provides another qualitative avenue for insight into capture strategies, which can be further studied to understand the causal or systemic impact of different narratives on discourse, public conceptions of AI, and knowledge production\. Our finding that there is substantial growth over time in *Discourse & Epistemic Influence* \(see Figure [2\(b\)](https://arxiv.org/html/2605.06806#S4.F2.sf2)\) indicates that how AI is framed is becoming increasingly important\. The consistent co\-occurrence of each D&EI mechanism with approximately one corresponding mechanism from other categories indicates the significant role that public\-facing campaigns play\. Public\-facing campaigns often occurred in parallel with lower\-profile activity such as direct influence on policy, abuse of monopoly power, elusion of law, and muddled roles between regulating agencies and the industries to be regulated\. Another core element in our findings is the central role played by governments and public officials across a subset of mechanisms and narratives\. While our analysis cannot speak to the impact of regulators’ entanglement with corporate actors, it corroborates concerns about public institutions’ opposition to regulation and outright efforts to reduce the application of existing regulation in the US, UK, and EU\. Our observations illustrate the tolerance of high\-profile *Revolving doors*, muddling roles between regulated and regulators, and more severe cases of active *Ownership/Stake in company* by incumbent public officials\. Such entanglement erodes public trust in the capability and willingness of institutions to scrutinise corporations and enforce laws\. Governments are deeply dependent on Big Tech infrastructure\. AI is broadly perceived by policy makers as central to the economic investments and growth plans of many countries\.
However, there are growing concerns regarding a lack of return on investments in AI technologies across economies \(Economist, [2025](https://arxiv.org/html/2605.06806#bib.bib17)\), and circular investments in the AI industry are contributing to increasing systemic risks to the global economy \(Arun, [2025](https://arxiv.org/html/2605.06806#bib.bib18)\)\. Of special concern is a cluster of narratives calling to *Reduce government inefficiencies* through the wholesale replacement of civil servants with experimental AI technologies at the behest of private interests \(Whittaker, [2025](https://arxiv.org/html/2605.06806#bib.bib163)\)\. This threatens public administrations by hindering their capacity to deliver citizen\-facing public services, to adequately support regulatory agencies and their vetting of “the validity of industry policy proposals”, and, more broadly, to plan nation\-wide civil service requirements \(Shapiro, [2012](https://arxiv.org/html/2605.06806#bib.bib155); Wellstead and Howlett, [2026](https://arxiv.org/html/2605.06806#bib.bib164)\)\. Compounding such economic risks with the impacts of capture on public values such as safety, health and fundamental rights provides a growing case for governments to reconsider their AI strategy and reflect on how their role in promoting corporate capture may be at odds with the public interest\.

#### 5\.1\.1\. Lessons from adjacent movements

Many of the mechanisms of capture used by Big AI that we have discussed in this paper mirror strategies that have historically been applied by similar industries such as Big Tobacco, Big Pharma, and Big Oil\.
Civil society’s efforts to hold big corporations accountable in these sectors are ongoing, have been met with significant challenges, and have often fallen short of meaningfully countering corporate power \(Gayle, [2025](https://arxiv.org/html/2605.06806#bib.bib147); The American Lung Association, [2025](https://arxiv.org/html/2605.06806#bib.bib148); Health, Education, Labor, and Pensions Committee \(Chair: Bernard Sanders\), [2024](https://arxiv.org/html/2605.06806#bib.bib149)\)\. Yet there remain lessons to be learned from these efforts and the broader scholarship on corporate capture\. For example, the OECD report on preventing policy capture in public decision\-making \(OECD, [2017](https://arxiv.org/html/2605.06806#bib.bib154)\) recommends to \(i\) level the playing field by engaging diverse stakeholders, \(ii\) ensure transparency and access to information, \(iii\) promote accountability via external control, effective competition, and regulatory policies, and \(iv\) define clear institutional codes of conduct, promote cultures of integrity, and establish appropriate frameworks for risk management\. Similarly, in the context of Big Tobacco, Lee \([2025](https://arxiv.org/html/2605.06806#bib.bib152)\) calls for appropriate separation between public and private interests, binding rules for government\-industry interactions to manage conflicts of interest, enforcement of transparency and accountability practices, and safeguarding academic knowledge production from undue industry influence\. A 2024 report \(AI Now Institute, [2024](https://arxiv.org/html/2605.06806#bib.bib158)\) further calls for applying transferable lessons from the US Food and Drug Administration on how to regulate and hold Big AI accountable\. However, implementing such safeguards becomes a challenge when significant corporate capture processes are in progress\.
Under these circumstances, activism, collective organizing, and public campaigning can serve to build collective power and pressure policymakers and regulatory bodies to foreground public interests \(Rogers et al\., [2025](https://arxiv.org/html/2605.06806#bib.bib150); Thomas\-Walters et al\., [2025](https://arxiv.org/html/2605.06806#bib.bib151); Harvey and Foley, [2021](https://arxiv.org/html/2605.06806#bib.bib153)\)\. For example, Harvey and Foley \([2021](https://arxiv.org/html/2605.06806#bib.bib153)\) describe the role of public organising during the COVID\-19 pandemic in successfully challenging Big Pharma and pressuring the US government to change its stance on the temporary waiver of the World Trade Organisation TRIPS rules to increase global production of COVID\-related health technologies\. Similarly, Thomas\-Walters et al\. \([2025](https://arxiv.org/html/2605.06806#bib.bib151)\) found strong evidence that climate activism can shift public opinion and media representation, and moderate evidence that it can influence voting and how politicians communicate\. Effecting change also requires a mix of tactics spanning protests, lawsuits, lobbying, economic pressure, and coalition\-building\. These collective efforts benefit from identifying critical pressure points, making bolder demands, and strategic coordination between actions \(Rogers et al\., [2025](https://arxiv.org/html/2605.06806#bib.bib150)\)\.

#### 5\.1\.2\. Counter\-narratives and resistance to Big AI capture

Our work on conceptualising and understanding capture supports the growing number of calls and actions to counter corporate capture, dominant narratives, and, more broadly, policy agendas that prioritise corporate agendas over public interests\. The qualitative insights into the mechanisms that enable capture \(and, to a lesser extent, the frequency with which these are reported on\) illuminate intervention points for resistance and for efforts to promote alternative regulatory interests and counter\-narratives\.
These include, but are not limited to: \(1\) work from civil society organisations that advise on regulatory implementation, standard setting, or strategic litigation \(European Center for Not\-for\-Profit Law \(ECNL\) and European AI & Society Fund, [2024](https://arxiv.org/html/2605.06806#bib.bib28); \(BEUC\), [2026](https://arxiv.org/html/2605.06806#bib.bib5); for Democracy & Technology, [2026a](https://arxiv.org/html/2605.06806#bib.bib6)\); \(2\) efforts to promote shared narratives and bottom\-up agendas for AI policy that are grounded in the experience of members of the public \(AI Now Institute, [2025](https://arxiv.org/html/2605.06806#bib.bib26); on AI, [2026](https://arxiv.org/html/2605.06806#bib.bib10)\); \(3\) work that uncovers the influences that facilitate capture, in particular regarding lobbying and deregulation \(LobbyFacts, [2026](https://arxiv.org/html/2605.06806#bib.bib12); for Democracy & Technology, [2026b](https://arxiv.org/html/2605.06806#bib.bib11)\); \(4\) independent investigative journalism that continues to document and expose numerous issues across the AI supply chain, from corporate power to discriminatory AI systems \(Reports, [2026](https://arxiv.org/html/2605.06806#bib.bib165); Bellingcat, [2026](https://arxiv.org/html/2605.06806#bib.bib166); Cadwalladr, [2026](https://arxiv.org/html/2605.06806#bib.bib167); Media, [2026](https://arxiv.org/html/2605.06806#bib.bib168)\); \(5\) independent audits that provide rigorous and reliable evidence of the workings of AI systems \(Romano et al\., [2024](https://arxiv.org/html/2605.06806#bib.bib162)\), with adequate investment in audit target identification and dissemination of results to support advocacy for fundamental rights \(Birhane et al\., [2024](https://arxiv.org/html/2605.06806#bib.bib160); Ojewale et al\., [2025](https://arxiv.org/html/2605.06806#bib.bib161)\); \(6\) efforts that mobilize labour movements to address power asymmetries \(Merchant, [2025](https://arxiv.org/html/2605.06806#bib.bib15)\); \(7\) efforts to invigorate existing commitments to climate and environmental protection \(Green Screen Coalition, [2025](https://arxiv.org/html/2605.06806#bib.bib16)\); and \(8\) work that highlights the necessity to respect rights within and outside the boundaries of regulatory concerns \(\(EDRi\), [2025](https://arxiv.org/html/2605.06806#bib.bib7); for Civil Liberties, [2023](https://arxiv.org/html/2605.06806#bib.bib13)\)\. Supporting, uplifting, and strengthening such efforts is key to understanding, challenging, and dismantling corporate capture while advancing regulation in the public interest\.

Finally, the findings in this paper have implications for scholarship in FAccT and other communities\. The taxonomy provides a scaffolding for other studies and investigations into corporate capture, which can be further refined and enriched\. It motivates the need to further reflect on the impact of capture on our own scholarship, on what that means for the community’s contributions to AI regulation, and on how we ensure that scientific integrity and plurality of perspectives are protected in our scholarship, across broader scientific communities, and, indirectly, in the associated societies whose institutions lean on our work \(Whittaker, [2021](https://arxiv.org/html/2605.06806#bib.bib49)\)\.

## 6\. Conclusion

In this study, we have examined the strategies and tactics employed by the Big AI industry to capture regulatory processes\. Our study reflects an urgent academic endeavour and a pressing real\-world issue with direct implications for people across the globe\. While democratic regulators ought to attend to the concerns of industrial sectors, regulation should always prioritise protecting and promoting the core public values for which governments bear responsibility\.
The AI industry's power, wealth, and influence have effects on the rule of law, the labour market, the environment, knowledge production, and, ultimately, on democratic processes and institutions that are so far-reaching and corrosive that policymakers ought to treat them as an emergency. Although our findings show the breadth and depth of regulatory capture, with no clear immediate path to meaningful accountability or to systems of truly independent oversight, we hope this work can help lay a foundation for developing and institutionalizing corrective measures. These might, for example, take the form of strengthening existing civil society efforts to document and expose the breadth and extent of the mechanisms being used, pressuring regulators for increased transparency and accountability in rule-making processes, and introducing declarations of conflicts of interest in regulatory processes, as well as at major conferences and scientific publication venues, for work on the societal impacts of AI. Government complicity is detrimental to ensuring the rule of law and to restoring trust in public interest technologies. Meaningful regulation, developed and enforced in line with the interests of the general public and vulnerable groups, is therefore in the interest of governments, regulatory institutions, and the AI industry itself.

## 7. Limitations

We have taken every measure within our means and control to curate our datasets. Nonetheless, our findings may be influenced by sampling bias, as well as by changes in reporting style and customs within Reuters. Future larger-scale sampling and annotation, for instance by including more events crucial to consensus-building in the preparation of regulation and by extending time windows and keywords, may provide an improved quantification of the issues at hand.
Notwithstanding the broad consensus on the high quality of the sources we selected (Lin et al., [2023](https://arxiv.org/html/2605.06806#bib.bib92)), all publications exist within a larger social and media ecosystem that is subject to the effects of media capture discussed in Section [2](https://arxiv.org/html/2605.06806#S2). Comparative analysis across geographically and culturally diverse news agencies and publication venues may thus provide improved insight into reporting biases, and afford more refined estimators for the occurrence of both mechanisms of capture and narratives. Additionally, such a pluralistic effort may inform further validation and development of our taxonomy of mechanisms of capture.

## 8. Generative AI Usage Statement

The authors declare that no form of generative AI was used in the writing of this paper or at any stage of the research process, other than as a support to a senior software engineer in the development of charts. All generated code was thoroughly reviewed line by line, edited, and tested against its intended specification.

## 9. Acknowledgments

We would like to thank Matt Davies, Petter Ericson, and two experts who prefer to remain anonymous, a legal scholar from a civil society organisation and an employee at a big corporation, respectively, for their invaluable feedback on this work. We are also grateful for the thorough feedback from the anonymous FAccT reviewers. The AI Accountability Lab is supported by grants from the John D. and Catherine T. MacArthur Foundation, the AI Collaborative of the Omidyar Group, Luminate Foundation, European AI & Society Fund, and Bestseller Foundation. Zeerak Talat is supported by the Arts and Humanities Research Council (grant AH/X007146/1).

## References

- The European Consumer Organisation
(BEUC) (2026) Open joint letter on the Digital Omnibus on AI: preserving the scope and integrity of the AI Act. Note: [https://www.beuc.eu/letters/open-joint-letter-digital-omnibus-ai-preserving-scope-and-integrity-ai-act](https://www.beuc.eu/letters/open-joint-letter-digital-omnibus-ai-preserving-scope-and-integrity-ai-act) Cited by: [item 1](https://arxiv.org/html/2605.06806#S5.I2.i1.2).
- European Digital Rights (EDRi) (2025) The EU must uphold hard-won protections for digital human rights. Note: [https://edri.org/wp-content/uploads/2025/11/The-EU-must-uphold-hard-won-protections-for-digital-human-rights.pdf](https://edri.org/wp-content/uploads/2025/11/The-EU-must-uphold-hard-won-protections-for-digital-human-rights.pdf) Cited by: [item 8](https://arxiv.org/html/2605.06806#S5.I2.i8.1).
- M. Abdalla and M. Abdalla (2021) The grey hoodie project: Big Tobacco, Big Tech, and the threat on academic integrity. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 287–297. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p7.1), [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1), [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p3.1).
- AFP (2025) Concentration of corporate power a 'huge' concern: UN rights chief. (en-US). External Links: [Link](https://us.afpnews.com/article/?concentration-of-corporate-power-a-huge-concern-un-rights-chief,83MG3CQ) Cited by: [§5.1](https://arxiv.org/html/2605.06806#S5.SS1.p1.1).
- P. J. Agrell and A. Gautier (2012) Rethinking regulatory capture. In Recent Advances in the Analysis of Competition Policy and Regulation. Cited by: [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p1.1).
- N. Ahmed, M. Wahed, and N. C. Thompson (2023) The growing influence of industry in AI research. Science 379(6635), pp. 884–886. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p2.1).
- AI Now Institute (2024) Lessons from the FDA for AI. External Links: [Link](https://ainowinstitute.org/publications/research/lessons-from-the-fda-for-ai) Cited by: [§5.1.1](https://arxiv.org/html/2605.06806#S5.SS1.SSS1.p1.1).
- AI Now Institute (2025) People's AI Action Plan Launches to Provide Counter-Weight to Trump's Industry-Backed AI Plan and EOs. (en-US). External Links: [Link](https://ainowinstitute.org/news/announcement/peoples-ai-action-plan-launches-to-provide-counter-weight-to-trumps-industry-backed-ai-plan-and-eos) Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p2.1), [item 2](https://arxiv.org/html/2605.06806#S5.I2.i2.2).
- L. Amin (2025) Revealed: "Shocking" scale of Big Tech's influence over Labour — democracyforsale.substack.com. Note: [https://democracyforsale.substack.com/p/revealed-shocking-scale-of-big-tech-influence-labour-peter-kyle-amazon-google-meta](https://democracyforsale.substack.com/p/revealed-shocking-scale-of-big-tech-influence-labour-peter-kyle-amazon-google-meta) [Accessed 26-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p4.1).
- A. Arun (2025) Bubble or Nothing. (en-US). External Links: [Link](https://publicenterprise.org/report/bubble-or-nothing/) Cited by: [§5.1](https://arxiv.org/html/2605.06806#S5.SS1.p6.1).
- A. Baba, D. M. Cook, T. O. McGarity, and L. A. Bero (2005) Legislating "sound science": the role of the tobacco industry. American Journal of Public Health 95(S1), pp. S20–S27. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1).
- J. Bak-Coleman, C. O'Connor, C. Bergstrom, and J. West (2025) The risks of industry influence in tech research. arXiv preprint arXiv:2510.19894. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p2.1).
- H.
Barakat (2024) Selective perspectives: a content analysis of the New York Times' reporting on artificial intelligence. Computer Says Maybe. Cited by: [§2.1.2](https://arxiv.org/html/2605.06806#S2.SS1.SSS2.p2.3).
- B. Baykurt (2025) Gov-tech as capture: public infrastructures under data capitalism. Information, Communication & Society, pp. 1–16. Cited by: [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p2.1).
- BBC (2025) UK social media campaigners among five denied US visas — bbc.com. Note: [https://www.bbc.com/news/articles/cp39kngz008o](https://www.bbc.com/news/articles/cp39kngz008o) [Accessed 26-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p4.1).
- R. Bellan (2025) Silicon Valley is pouring millions into pro-AI PACs to sway midterms — TechCrunch — techcrunch.com. Note: [https://techcrunch.com/2025/08/25/silicon-valley-is-pouring-millions-into-pro-ai-pacs-to-sway-midterms/](https://techcrunch.com/2025/08/25/silicon-valley-is-pouring-millions-into-pro-ai-pacs-to-sway-midterms/) [Accessed 27-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p6.1).
- Bellingcat (2026) External Links: [Link](https://www.bellingcat.com/) Cited by: [item 4](https://arxiv.org/html/2605.06806#S5.I2.i4.2).
- A. Birhane, R. Steed, V. Ojewale, B. Vecchione, and I. D. Raji (2024) AI auditing: the broken bus on the road to AI accountability. In 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), pp. 612–643. External Links: ISBN 979-8-3503-4950-4, [Link](https://ieeexplore.ieee.org/document/10516659/), [Document](https://dx.doi.org/10.1109/SaTML59370.2024.00037) Cited by: [item 5](https://arxiv.org/html/2605.06806#S5.I2.i5.2).
- L. Cabrera, L. Caroli, and D. E. Harris (2025) Human rights are universal, not optional: don't undermine the EU AI Act with a faulty code of practice. Tech Policy Press. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p5.1).
- C.
Cadwalladr (2026) The Nerve. External Links: [Link](https://www.thenerve.news/) Cited by: [item 4](https://arxiv.org/html/2605.06806#S5.I2.i4.2).
- D. Carpenter and D. A. Moss (2013) Preventing regulatory capture: special interest influence and how to limit it. Cambridge University Press. Cited by: [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p1.1).
- D. M. Carr (2003) Pfizer's epidemic: a need for international regulation of human experimentation in developing countries. Case W. Res. J. Int'l L. 35, pp. 15. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p3.1).
- L. Cerulus, H. Cokelaere, M. Gros, and B. Brzeziński (2025) Ranked: the 10 most intensely lobbied EU laws. Politico. Cited by: [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p2.1).
- Public Citizen (2025a) Deleting Tech Enforcement - Public Citizen — citizen.org. Note: [https://www.citizen.org/article/deleting-enforcement-trump-big-tech-billion-report/](https://www.citizen.org/article/deleting-enforcement-trump-big-tech-billion-report/) [Accessed 27-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p5.1).
- Public Citizen (2025b) How Trump Is Halting Enforcement Against Corporate Lawbreakers — citizen.org. Note: [https://www.citizen.org/article/corporate-clemency-trump-enforcement-report/](https://www.citizen.org/article/corporate-clemency-trump-enforcement-report/) [Accessed 27-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p5.1).
- European
Commission (2025) Speech by President von der Leyen at the Copenhagen Competitiveness Summit — luxembourg.representation.ec.europa.eu. Note: [https://luxembourg.representation.ec.europa.eu/actualites-et-evenements/actualites/speech-president-von-der-leyen-copenhagen-competitiveness-summit-2025-10-01_en](https://luxembourg.representation.ec.europa.eu/actualites-et-evenements/actualites/speech-president-von-der-leyen-copenhagen-competitiveness-summit-2025-10-01_en) [Accessed 27-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p5.1), [§4.3](https://arxiv.org/html/2605.06806#S4.SS3.p3.1).
- Corporate Europe Observatory (2025) Bias baked in: How Big Tech sets its own AI standards. (en). External Links: [Link](https://corporateeurope.org/en/2025/01/bias-baked) Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p2.1).
- C. F. Cranor (2008) The tobacco strategy entrenched. American Association for the Advancement of Science. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1).
- E. Dal Bó (2006) Regulatory capture: a review. Oxford Review of Economic Policy 22(2), pp. 203–225. Cited by: [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p1.1).
- K. De Liban (2024) Inescapable AI: the ways AI decides how low-income people work, live, learn, and survive. Techtonic Justice. https://www.techtonicjustice.org/reports/inescapable-ai. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- S. Dehghan, M. U. Sen, and B. Yanikoglu (2025) Dealing with Annotator Disagreement in Hate Speech Classification. arXiv. Note: arXiv:2502.08266 [cs]. Comment: 20 pages, 3 tables. External Links: [Link](http://arxiv.org/abs/2502.08266), [Document](https://dx.doi.org/10.48550/arXiv.2502.08266) Cited by: [§3.3](https://arxiv.org/html/2605.06806#S3.SS3.p2.1).
- R. Dobbe (2022) System safety and artificial intelligence. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp.
1584–1584. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- The Economist (2025) Investors expect AI use to soar. That's not happening. The Economist. External Links: ISSN 0013-0613, [Link](https://www.economist.com/finance-and-economics/2025/11/26/investors-expect-ai-use-to-soar-thats-not-happening) Cited by: [§5.1](https://arxiv.org/html/2605.06806#S5.SS1.p6.1).
- R. Enikolopov and M. Petrova (2015) Media capture: empirical evidence. In Handbook of Media Economics, Vol. 1, pp. 687–700. Cited by: [§2.1.2](https://arxiv.org/html/2605.06806#S2.SS1.SSS2.p1.1).
- A. Estache, L. Wren-Lewis, S. Rose-Ackerman, and T. Soreide (2011) Anti-corruption policy in theories of sector regulation. Chapter 9, pp. 269–299. Cited by: [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p1.1).
- EU (2018) Antitrust: Commission fines Google €4.34 billion for illegal practices regarding Android mobile devices to strengthen dominance of Google's search engine — ec.europa.eu. Note: [https://ec.europa.eu/commission/presscorner/detail/en/ip_18_4581](https://ec.europa.eu/commission/presscorner/detail/en/ip_18_4581) [Accessed 23-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- EU (2023) EU AI Act: first regulation on artificial intelligence. Note: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- European Center for Not-for-Profit Law (ECNL) and European AI & Society Fund (2024) Towards an AI Act that serves people and society: strategic actions for civil society and funders on the enforcement of the EU AI Act. External Links: [Link](https://ecnl.org/sites/default/files/2024-08/241508_AIAct%20implementation_ECNL%20report_final%20design.pdf) Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p2.1), [item 1](https://arxiv.org/html/2605.06806#S5.I2.i1.2).
- European Commission (2025) Simpler digital rules to help EU businesses grow. (en). External Links: [Link](https://commission.europa.eu/news-and-media/news/simpler-digital-rules-help-eu-businesses-grow-2025-11-19_en) Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p2.1).
- S. Feldstein (2019) The global expansion of AI surveillance. Vol. 17, Carnegie Endowment for International Peace, Washington, DC. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- Irish Council for Civil Liberties (2023) The European Commission must follow Ireland's lead, and switch off Big Tech's toxic algorithms. Note: [https://www.iccl.ie/2023/the-european-commission-must-follow-irelands-lead-and-switch-off-big-techs-toxic-algorithms/](https://www.iccl.ie/2023/the-european-commission-must-follow-irelands-lead-and-switch-off-big-techs-toxic-algorithms/) Cited by: [item 8](https://arxiv.org/html/2605.06806#S5.I2.i8.1).
- Center for Democracy & Technology (2026a) Joint open letter: preserving the scope and integrity of the AI Act. Note: https://cdt.org/insights/joint-open-letter-preserving-the-scope-and-integrity-of-the-ai-act/ Cited by: [item 1](https://arxiv.org/html/2605.06806#S5.I2.i1.2).
- Center for Democracy & Technology (2026b) This is what corporate capture looks like! Report: how corporations run the EU deregulation agenda. Note: [https://corporateeurope.org/en/2026/04/what-corporate-capture-looks](https://corporateeurope.org/en/2026/04/what-corporate-capture-looks) Cited by: [item 3](https://arxiv.org/html/2605.06806#S5.I2.i3.2).
- FragDenStaat (2025) Koalitionsverhandlungen CDU/CSU/SPD AG 3 - Digitales [Coalition negotiations CDU/CSU/SPD, Working Group 3 - Digital] — fragdenstaat.de. Note: [https://fragdenstaat.de/dokumente/258016-koalitionsverhandlungen-cdu-csu-spd-ag-3-digitales/](https://fragdenstaat.de/dokumente/258016-koalitionsverhandlungen-cdu-csu-spd-ag-3-digitales/) [Accessed 27-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p5.1).
- D.
Gayle (2025) COP30 was meant to be a turning point, so why do some say the climate summit is broken? The Guardian. Cited by: [§5.1.1](https://arxiv.org/html/2605.06806#S5.SS1.SSS1.p1.1).
- R. Gorwa, G. Lechowski, and D. Schneiß (2024) Platform lobbying: policy influence strategies and the EU's Digital Services Act. Internet Policy Review 13(2), pp. 1–26. Cited by: [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p2.1).
- Green Screen Coalition (2025) Within Bounds: Limiting AI's environmental impact. (en). Note: Section: blog. External Links: [Link](https://greenscreen.network/en/blog/within-bounds-limiting-ai-environmental-impact/) Cited by: [item 7](https://arxiv.org/html/2605.06806#S5.I2.i7.1).
- A. B. Hall, A. Sun, and G. Stanford (2025) Investing in political expertise: the remarkable scale of corporate policy teams. Cited by: [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p2.1).
- R. Harvey and M. Foley (2021) How to fight big pharma — and win. External Links: [Link](https://truthout.org/articles/how-to-fight-big-pharma-and-win/) Cited by: [§5.1.1](https://arxiv.org/html/2605.06806#S5.SS1.SSS1.p2.1).
- Health, Education, Labor, and Pensions Committee (Chair: Bernard Sanders) (2024) Big pharma's business model: corporate greed. External Links: [Link](https://www.help.senate.gov/imo/media/doc/big_pharmas_business_model_report.pdf) Cited by: [§5.1.1](https://arxiv.org/html/2605.06806#S5.SS1.SSS1.p1.1).
- A. R. Hevner, S. T. March, J. Park, and S. Ram (2004) Design Science in Information Systems Research. MIS Quarterly, pp. 75–105 (en). Cited by: [§3.1](https://arxiv.org/html/2605.06806#S3.SS1.p1.1).
- A. R. Hevner (2007) A three cycle view of design science research. Scandinavian Journal of Information Systems 19(2), pp. 4. Cited by: [§3.1](https://arxiv.org/html/2605.06806#S3.SS1.p2.1).
- G. Hilal, T. Hilal, and M.
Al-Fawareh (2024) Misinformation and the demonization of human rights: the Jordanian child rights law. Cogent Education 11(1), pp. 2329417. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- S. Hiltner, E. Eaton, N. Healy, A. Scerri, J. C. Stephens, and G. Supran (2024) Fossil fuel industry influence in higher education: a review and a research agenda. Wiley Interdisciplinary Reviews: Climate Change 15(6), pp. e904. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1).
- R. D. Hurt, M. E. Muggli, and L. B. Becker (2004) Turning free speech into commercial speech: Philip Morris' use of journalists to discredit the EPA report on secondhand smoke. Journal of Clinical Oncology 22(14_suppl), pp. 6151–6151. Cited by: [§2.1.2](https://arxiv.org/html/2605.06806#S2.SS1.SSS2.p1.1).
- AI Now Institute (2025) Artificial Power: 2025 Landscape Report — ainowinstitute.org. Note: [https://ainowinstitute.org/publications/research/ai-now-2025-landscape-report](https://ainowinstitute.org/publications/research/ai-now-2025-landscape-report) [Accessed 23-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1), [§5.1](https://arxiv.org/html/2605.06806#S5.SS1.p1.1).
- The Ada Lovelace Institute and The Alan Turing Institute (2025) How do people feel about AI? Wave two of a nationally representative survey of UK attitudes to AI. The Ada Lovelace Institute. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- P. R. Kalluri, W. Agnew, M. Cheng, K. Owens, L. Soldaini, and A. Birhane (2025) Computer-vision research powers surveillance technology. Nature, pp. 1–7. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- S. Krimsky (1985) The corporate capture of academic science and its social costs. In Genetics and the Law III, pp. 45–55. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1).
- S.
Krimsky (2004) Science in the private interest: has the lure of profits corrupted biomedical research? Bloomsbury Publishing PLC. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1).
- P. Lachapelle, P. Belmont, M. Grasso, R. McCann, D. H. Gouge, J. Husch, C. de Boer, D. Molzbichler, and S. Klain (2024) Academic capture in the Anthropocene: a framework to assess climate action in higher education. Climatic Change 177(3), pp. 40. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1).
- F. Lancieri, L. Edelson, and S. Bechtold (2024) AI regulation: competition, arbitrage & regulatory capture. Center for Law & Economics Working Paper Series 11. Cited by: [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p2.1).
- K. Lee (2025) Engaging policymakers on the commercial determinants of health: lessons from global tobacco control. External Links: [Link](https://www.publichealthontario.ca/-/media/Event-Presentations/25/07/engaging-policymakers-global-tobacco-control.pdf) Cited by: [§5.1.1](https://arxiv.org/html/2605.06806#S5.SS1.SSS1.p1.1).
- Open Letter (2024) External Links: [Link](https://euneedsai.com//) Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p6.1).
- D. Levi-Faur (2011) Handbook on the Politics of Regulation. Edward Elgar Publishing. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p3.1), [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p1.1).
- W. Y. Li (2023) Regulatory capture's third face of power. Socio-Economic Review 21(2), pp. 1217–1245. Cited by: [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p1.1).
- H. Lin, J. Lasser, S. Lewandowsky, R. Cole, A. Gully, D. G. Rand, and G. Pennycook (2023) High level of correspondence across different news domain quality rating sets. PNAS Nexus 2(9), pp.
pgad286. External Links: ISSN 2752-6542, [Link](https://doi.org/10.1093/pnasnexus/pgad286), [Document](https://dx.doi.org/10.1093/pnasnexus/pgad286) Cited by: [§3.2](https://arxiv.org/html/2605.06806#S3.SS2.p3.1), [§7](https://arxiv.org/html/2605.06806#S7.p1.1).
- LobbyFacts (2026) LobbyFacts - exposing lobbying in the European institutions. Note: [https://www.lobbyfacts.eu/](https://www.lobbyfacts.eu/) Cited by: [item 3](https://arxiv.org/html/2605.06806#S5.I2.i3.2).
- S. Loewenberg (2008) Drug company trials come under increasing scrutiny. The Lancet 371(9608), pp. 191–192. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p3.1).
- Parliament Politics Magazine (2025) Britain delays AI regulations to align with Trump's policies — parliamentnews.co.uk. Note: [https://parliamentnews.co.uk/britain-delays-ai-regulations-to-align-with-trumps-policies](https://parliamentnews.co.uk/britain-delays-ai-regulations-to-align-with-trumps-policies) [Accessed 29-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p5.1).
- D. McQuillan (2022) Resisting AI: an anti-fascist approach to artificial intelligence. Bristol University Press. External Links: ISBN 978-1-5292-1349-2, 978-1-5292-1350-8 Cited by: [§5.1](https://arxiv.org/html/2605.06806#S5.SS1.p2.1).
- 404 Media (2026) External Links: [Link](https://www.404media.co/) Cited by: [item 4](https://arxiv.org/html/2605.06806#S5.I2.i4.2).
- B. Merchant (2025) Hundreds of workers mobilize to 'Stop Gen AI' and help each other survive AI automation. Blood in the Machine (en). External Links: [Link](https://www.bloodinthemachine.com/p/hundreds-of-workers-mobilize-to-stop) Cited by: [item 6](https://arxiv.org/html/2605.06806#S5.I2.i6).
- D. Michaels (2008) Doubt is their product: how industry's assault on science threatens your health. Oxford University Press. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1).
- G.
Miller (2025) Data centers have a political problem — and big tech wants to fix it. Note: [https://www.politico.com/news/2025/12/17/data-centers-have-a-political-problem-and-big-tech-wants-to-fix-it-00693695](https://www.politico.com/news/2025/12/17/data-centers-have-a-political-problem-and-big-tech-wants-to-fix-it-00693695) [Accessed 27-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p6.1).
- J. M. Morgan and D. Duffy (2019) The cost of capture: how the pharmaceutical industry has corrupted policymakers and harmed patients. The Roosevelt Institute. Cited by: [§2.2](https://arxiv.org/html/2605.06806#S2.SS2.p1.1).
- A. Moro and N. Invernizzi (2017) The thalidomide tragedy: the struggle for victims' rights and improved pharmaceutical regulation. História, Ciências, Saúde-Manguinhos 24, pp. 603–622. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p3.1).
- M. Murgia (2019) AI academics under pressure to do commercial research. Financial Times 13. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p2.1).
- G. Muttitt (2004) Degrees of capture: universities, the oil industry and climate change. New Economics Foundation. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1).
- Meta Newsroom (2024) Building AI Technology for Europeans in a Transparent and Responsible Way. (en-US). Note: [Accessed 29-12-2025]. External Links: [Link](https://about.fb.com/news/2024/06/building-ai-technology-for-europeans-in-a-transparent-and-responsible-way/) Cited by: [§4.3](https://arxiv.org/html/2605.06806#S4.SS3.p7.1).
- M. Nie (2024) Artificial intelligence: the biggest threat to democracy today? In Proceedings of the AAAI Symposium Series, Vol. 3, pp. 376–379. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- D. Noor (2024) Elite US universities rake in millions from big oil donations, research finds. The Guardian. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1).
- Corporate Europe Observatory (2023) I Challenge Thee — corporateeurope.org. Note: [https://corporateeurope.org/en/2023/11/byte-byte](https://corporateeurope.org/en/2023/11/byte-byte) [Accessed 26-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p4.1).
- Corporate Europe Observatory (2024) Trojan horses: how European startups teamed up with Big Tech to gut the AI Act — corporateeurope.org. Note: [https://corporateeurope.org/en/2024/03/trojan-horses-how-european-startups-teamed-big-tech-gut-ai-act](https://corporateeurope.org/en/2024/03/trojan-horses-how-european-startups-teamed-big-tech-gut-ai-act) [Accessed 26-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p4.1).
- OECD (2017) Preventing policy capture: integrity in public decision making. OECD Publications Centre. Cited by: [§5.1.1](https://arxiv.org/html/2605.06806#S5.SS1.SSS1.p1.1).
- P. Offermann, S. Blom, M. Schönherr, and U. Bub (2010) Artifact Types in Information Systems Design Science – A Literature Review. In Global Perspectives on Design Science Research, R. Winter, J. L. Zhao, and S. Aier (Eds.), Lecture Notes in Computer Science, Berlin, Heidelberg, pp. 77–92 (en). External Links: ISBN 978-3-642-13335-0, [Document](https://dx.doi.org/10.1007/978-3-642-13335-0%5F6) Cited by: [§3.1](https://arxiv.org/html/2605.06806#S3.SS1.p1.1).
- V. Ojewale, R. Steed, B. Vecchione, A. Birhane, and I. D. Raji (2025) Towards AI accountability infrastructure: gaps and opportunities in AI audit tooling. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pp. 1–29. External Links: ISBN 979-8-4007-1394-1, [Link](https://dl.acm.org/doi/10.1145/3706598.3713301), [Document](https://dx.doi.org/10.1145/3706598.3713301) Cited by: [item 5](https://arxiv.org/html/2605.06806#S5.I2.i5.2).
- S. O. Olanipekun (2025) Computational propaganda and misinformation: AI technologies as tools of media manipulation. World Journal of Advanced Research and Reviews 25(1), pp.
911–923. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- People's Consultation on AI (2026) People's consultation on AI in Canada. Note: [https://www.peoplesaiconsultation.ca/](https://www.peoplesaiconsultation.ca/) Cited by: [item 2](https://arxiv.org/html/2605.06806#S5.I2.i2.2).
- Issue One (2025) Big Tech Cozies Up to New Administration After Spending Record Sums on Lobbying Last Year - Issue One — issueone.org. Note: [https://issueone.org/articles/big-tech-spent-record-sums-on-lobbying-last-year/](https://issueone.org/articles/big-tech-spent-record-sums-on-lobbying-last-year/) [Accessed 26-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p4.1).
- Y. Oortwijn, T. Ossenkoppele, and A. Betti (2021) Interrater Disagreement Resolution: A Systematic Procedure to Reach Consensus in Annotation Tasks. In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), A. Belz, S. Agarwal, Y. Graham, E. Reiter, and A. Shimorina (Eds.), Online, pp. 131–141. Cited by: [§3.3](https://arxiv.org/html/2605.06806#S3.SS3.p3.1).
- D. Orr (2010) Merchants of doubt: how a handful of scientists obscured the truth on issues from tobacco smoke to global warming. Nature 466(7306), pp. 565–566. Cited by: [§2.1.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1).
- H. J. Pandit, D. A. H. Blankvoort, S. Luccioni, and A. Birhane (2026) Terms of (ab)use: an analysis of GenAI services. arXiv preprint arXiv:2603.18964. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- F. Pasquale and H. Sun (2024) Consent and compensation: resolving generative AI's copyright crisis. Va. L. Rev. Online 110, pp. 207. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- The Washington
Post (2025). Note: [https://www.washingtonpost.com/investigations/2025/09/08/meta-research-child-safety-virtual-reality/](https://www.washingtonpost.com/investigations/2025/09/08/meta-research-child-safety-virtual-reality/) [Accessed 26-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p4.1).
- J. P. Quintais (2025) Generative AI, copyright and the AI Act. Computer Law & Security Review 56, pp. 106107. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- Lighthouse Reports (2026) External Links: [Link](https://www.lighthousereports.com/about/) Cited by: [item 4](https://arxiv.org/html/2605.06806#S5.I2.i4.2).
- Reuters (2025). Note: [https://www.reuters.com/world/us/trump-administration-orders-enhanced-vetting-applicants-h-1b-visa-2025-12-04/](https://www.reuters.com/world/us/trump-administration-orders-enhanced-vetting-applicants-h-1b-visa-2025-12-04/) [Accessed 26-12-2025] Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p4.1).
- N. Robins-Early and D. Kerr (2025) Trump signs executive order aimed at preventing states from regulating AI. The Guardian. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p5.1).
- N. Robins-Early (2025) How Big Tech is creating its own friendly media bubble to 'win the narrative battle online'. Cited by: [§2.1.2](https://arxiv.org/html/2605.06806#S2.SS1.SSS2.p2.3).
- P. Robles and D. J. Mallinson (2025) Artificial intelligence technology, public trust, and effective governance. Review of Policy Research 42(1), pp. 11–28. Cited by: [§1](https://arxiv.org/html/2605.06806#S1.p1.1).
- C. Rogers, M. Ostarek, and S. Nadel (2025) Strategies and tactics to curb the fossil fuel industry. External Links: [Link](https://www.socialchangelab.org/tactics-curb-fossil-fuel-corporations) Cited by: [§5.1.1](https://arxiv.org/html/2605.06806#S5.SS1.SSS1.p2.1).
- S. Romano, R. Angius, N. Kerby, P. Bouchaud, J. Amidei, and A.
Kaltenbrunner \(2024\)A dataset to assess microsoft copilot answers in the context of swiss, bavarian and hessian elections\.18,pp\. 2040–2050\.External Links:ISSN 2334\-0770,[Link](https://ojs.aaai.org/index.php/ICWSM/article/view/31446),[Document](https://dx.doi.org/10.1609/icwsm.v18i1.31446)Cited by:[item 5](https://arxiv.org/html/2605.06806#S5.I2.i5.2)\. - J\. B\. Sass, B\. Castleman, and D\. Wallinga \(2005\)Vinyl chloride: a case study of data suppression and misrepresentation\.Environmental Health Perspectives113\(7\),pp\. 809–812\.Cited by:[§2\.1\.1](https://arxiv.org/html/2605.06806#S2.SS1.SSS1.p1.1)\. - E\. Savell, A\. B\. Gilmore, and G\. Fooks \(2014\)How does the tobacco industry attempt to influence marketing regulations? a systematic review\.PloS one9\(2\),pp\. e87389\.Cited by:[§2\.2](https://arxiv.org/html/2605.06806#S2.SS2.p1.1)\. - A\. Schiffrin \(2018\)Introduction to special issue on media capture\.Vol\.19,SAGE Publications Sage UK: London, England\.Cited by:[§2\.1\.2](https://arxiv.org/html/2605.06806#S2.SS1.SSS2.p1.1)\. - C\. Schyns \(2023\)The lobbying ghost in the machine\.Cited by:[§2\.2](https://arxiv.org/html/2605.06806#S2.SS2.p2.1)\. - S\. Shapiro \(2012\)The complexity of regulatory capture: diagnosis, causality and remediation\.Roger Williams University Law Review17\(1\)\.External Links:ISSN 1090\-3968Cited by:[§2\.2](https://arxiv.org/html/2605.06806#S2.SS2.p1.1),[§2](https://arxiv.org/html/2605.06806#S2.p1.1),[§5\.1](https://arxiv.org/html/2605.06806#S5.SS1.p2.1),[§5\.1](https://arxiv.org/html/2605.06806#S5.SS1.p6.1)\. - S\. Shead \(2021\)Amazon hit with $887 million fine by European privacy watchdog — cnbc\.com\.Note:[https://www\.cnbc\.com/2021/07/30/amazon\-hit\-with\-fine\-by\-eu\-privacy\-watchdog\-\.html](https://www.cnbc.com/2021/07/30/amazon-hit-with-fine-by-eu-privacy-watchdog-.html)\[Accessed 23\-12\-2025\]Cited by:[§1](https://arxiv.org/html/2605.06806#S1.p1.1)\. - A\. 
Shleifer \(2005\)Understanding regulation\.\.European Financial Management11\(4\)\.Cited by:[§1](https://arxiv.org/html/2605.06806#S1.p3.1)\. - Sifted \(2026\)Note:[https://sifted\.eu/articles/mistral\-helsing\-defence\-ai\-action\-summit\-paris/](https://sifted.eu/articles/mistral-helsing-defence-ai-action-summit-paris/)Cited by:[§3\.3](https://arxiv.org/html/2605.06806#S3.SS3.p5.1)\. - B\. Singh \(2025\)Epistemic destabilization: ai\-driven knowledge generation and the collapse of validation systems\.InProceedings of the AAAI/ACM Conference on AI, Ethics, and Society,Vol\.8,pp\. 2387–2398\.Cited by:[§1](https://arxiv.org/html/2605.06806#S1.p1.1)\. - I\. Solaiman, Z\. Talat, W\. Agnew, L\. Ahmad, D\. Baker, S\. L\. Blodgett, C\. Chen, H\. Daumé III, J\. Dodge, I\. Duan,et al\.\(2023\)Evaluating the social impact of generative ai systems in systems and society\.arXiv preprint arXiv:2306\.05949\.Cited by:[§1](https://arxiv.org/html/2605.06806#S1.p1.1)\. - J\. E\. Stiglitz \(2017\)Toward a taxonomy of media capture\.Cited by:[§2\.1\.2](https://arxiv.org/html/2605.06806#S2.SS1.SSS2.p1.1)\. - M\. Taft \(2024\)How oil companies manipulate journalists\.Drilled\.Cited by:[§2\.1\.2](https://arxiv.org/html/2605.06806#S2.SS1.SSS2.p1.1)\. - E\. Tan \(2025\)Silicon Valley Pledges $200 Million to New Pro\-A\.I\. Super PACs — nytimes\.com\.Note:[https://www\.nytimes\.com/2025/08/26/technology/silicon\-valley\-ai\-super\-pacs\.html](https://www.nytimes.com/2025/08/26/technology/silicon-valley-ai-super-pacs.html)\[Accessed 27\-12\-2025\]Cited by:[§1](https://arxiv.org/html/2605.06806#S1.p6.1)\. - J\. Tanner and J\. Bryden \(2023\)Reframing ai in civil society: beyond risk & regulation\.Note:[https://rootcause\.global/framing\-ai/](https://rootcause.global/framing-ai/)Cited by:[§2\.1\.2](https://arxiv.org/html/2605.06806#S2.SS1.SSS2.p2.3)\. 
- The American Lung Association \(2025\)“State of tobacco control” 2025: tobacco industry’s aggressive actions to protect its profits slows proven policies to prevent and reduce tobacco use\.External Links:[Link](https://www.lung.org/content/sotc/2025/ala-state-of-tobacco-control-2025.pdf)Cited by:[§5\.1\.1](https://arxiv.org/html/2605.06806#S5.SS1.SSS1.p1.1)\. - L\. Thomas\-Walters, E\. G\. Scheuch, A\. Ong, and M\. H\. Goldberg \(2025\)The impacts of climate activism\.Current Opinion in Behavioral Sciences63,pp\. 101498\.Cited by:[§5\.1\.1](https://arxiv.org/html/2605.06806#S5.SS1.SSS1.p2.1)\. - A\. Tyson, G\. Pasquini, A\. Spencer, and C\. Funk \(2023\)60% of americans would be uncomfortable with provider relying on ai in their own health care\.Cited by:[§1](https://arxiv.org/html/2605.06806#S1.p1.1)\. - F\. Van Der Vlist, A\. Helmond, and F\. Ferrari \(2024\)Big ai: cloud infrastructure dependence and the industrialisation of artificial intelligence\.Big Data & Society11\(1\),pp\. 20539517241232630\.Cited by:[footnote 1](https://arxiv.org/html/2605.06806#footnote1)\. - L\. Vertinsky \(2021\)Pharmaceutical \(re\) capture\.Yale J\. Health Pol’y L\. & Ethics20,pp\. 146\.Cited by:[§2\.2](https://arxiv.org/html/2605.06806#S2.SS2.p1.1)\. - J\. vom Brocke, A\. Hevner, and A\. Maedche \(2020\)Introduction to Design Science Research\.InDesign Science Research\. Cases,J\. vom Brocke, A\. Hevner, and A\. Maedche \(Eds\.\),pp\. 1–13\(en\)\.External Links:ISBN 978\-3\-030\-46781\-4,[Link](https://doi.org/10.1007/978-3-030-46781-4_1),[Document](https://dx.doi.org/10.1007/978-3-030-46781-4%5F1)Cited by:[§3\.1](https://arxiv.org/html/2605.06806#S3.SS1.p1.1)\. - K\. Wei, C\. Ezell, N\. Gabrieli, and C\. Deshpande \(2024\)How do ai companies “fine\-tune” policy? examining regulatory capture in ai governance\.InProceedings of the AAAI/ACM Conference on AI, Ethics, and Society,Vol\.7,pp\. 1539–1555\.Cited by:[§2\.2](https://arxiv.org/html/2605.06806#S2.SS2.p2.1)\. - H\. 
## Appendix A. Taxonomy and Annotation Template

### A.1. Taxonomy of Mechanisms of Capture and corresponding descriptions

1. **Direct influence on policy:** Mechanisms whose aim is to influence the positions or decisions of public officials and regulations
    1. **Lobbying:** Communication made by a person or organisation, on their own behalf or on behalf of another entity, to a public official or stakeholder with the goal of influencing decisions or creating a favourable position for themselves
    2. **Private meetings with regulators (outside lobbying):** Meetings and communications that occur directly between a person or organisation and public officials, outside of channels regulated under lobbying transparency laws
    3. **Political contributions:** Contributions by a person or organisation to a political entity or person
    4. **Economic coercion of government:** The use of potential economic advantage or disadvantage as a threat to induce changes in decisions or positions of the government
2. **Conflicting involvement:** Mechanisms that represent inherent conflicts due to the involvement of an entity in a specific position
    1. **Revolving door:** The pattern of governmental or public officials taking up positions or roles in private entities in areas which they regulated or had authority over, as well as the inverse, where public officials are appointed from private entities that were the subject of enforcement and regulation
    2. **Direct involvement in rule-making:** Involvement of private industry, governmental, or other entities in the process of developing law or policy without a clearly defined legal mandate or authorisation to do so
    3. **Ownership/stake in company:** Public officials taking ownership, stocks, or other forms of stake in an organisation that is regulated or overseen by the office they are part of
3. **Market influence:** Mechanisms that involve participation, co-operation, or coercion of market actors
    1. **Standard setting in consortia:** Utilising favourable consortia to develop, set, and dictate standards or practices without the involvement of other relevant stakeholders
    2. **Corralling SMEs/orgs to oppose:** Imploring, organising, or using small and medium-sized enterprises (SMEs) or their representative organisations to oppose a policy or position
    3. **Standard setting via monopoly:** Using a position of monopoly or dominance to dictate practices
    4. **Economic coercion of competition:** The use of potential economic advantage or disadvantage as a threat to induce changes in decisions or practices of competitors
4. **Elusion of law:** Mechanisms that go directly or indirectly against the spirit or the letter of the law
    1. **Disregard existing laws:** Practices that disregard the requirements or processes established in laws
    2. **Misinterpret laws:** Using an alternative reading or intentional misrepresentation of the law as a way to deflect or avoid its requirements or implications
    3. **Relocation of development and labour:** Moving labour, deployments, or setups from one location to another
    4. **Exploit weak regulations/jurisdictions:** Moving organisations, procurements, or deployments to jurisdictions that have considerably weaker regulations or enforcement
    5. **Retaliation against whistleblowers & authorities:** Acting against whistleblowers or authorities directly or indirectly, or taking actions that lead to negative consequences for, or suppress the voice or actions of, whistleblowers
    6. **Bribery:** The solicitation, payment, or use of favours in exchange for official actions or decisions
5. **Epistemic & discourse influence:** Mechanisms that create or use narratives to promote a specific practice or opinion
    1. **Corporate sponsorship of events:** Utilising sponsorship of events to promote the organisation or imply its support for specific principles or topics
    2. **Funding and sponsorship of research & education:** Direct or indirect funding and sponsorship of research conducted outside the organisation, typically in the educational and academic sectors
    3. **Public-facing campaign:** Use of traditional or other public-facing media to advertise, promote, or create a narrative against a specific person, topic, or action, and/or to promote a positive and misleading narrative about a company, AI product, or practice
    4. **Hyping technologies:** Promoting or implying capabilities of technologies on the basis of hypothetical speculation, without empirical evidence or theoretical and conceptual rigour
    5. **Playing victim:** Industry representatives complaining, via public campaigns, directly to regulators, or via other means, that they have been subject to unfair rules
    6. **Undermining risks/harms:** Disregarding, diminishing, hiding, or otherwise downplaying harms, or the potential for risks or harms, associated with AI technologies
    7. **Speculative studies:** Use of studies, analyses, or reports that do not follow standard scientific or conceptual rigour
    8. **Ethics washing:** Companies or corporations posing as guardians of ethics, transparency, accountability, and safety without meaningful actions or implementations to support those claims, or while engaging in practices that undermine these principles
    9. **Government adopting industry framing:** Governments, public bodies, or public officials adopting, repeating, or only considering framings, positions, or approaches favoured by industry while ignoring, refusing, or diminishing other approaches
    10. **Conflation of public and private interest:** Mixing up or presenting private interests as public benefits even though they specifically benefit non-public actors without a corresponding benefit to the public

### A.2. Annotation template

1. **Article ID:** a unique identifier for the article
2. **Title:** the title as used in the article
3. **Published:** date of publication of the article
4. **Annotator:** the person who performed the annotation
5. **Duplicate of:** ID of the article of which this is a duplicate
6. **Mechanism Category:** applicable mechanisms from the taxonomy (see Appendix [A.1](https://arxiv.org/html/2605.06806#A1.SS1)). If none is applicable, the additionally provided option *No capture* is selected
7. **Mechanism:** description of the mechanism as it appears or is evident in the article
8. **Evidence:** excerpts from the article text that exhibit or describe the mechanism or its effects
9. **Narratives used:** narratives identified in the article
10. **Notes:** space for annotators to capture relevant information, perspectives, or questions for discussion
11. **Consensus:** mechanisms selected after discussion between the pair of annotators in subsequent stages
12. **Out of scope:** indicator signalling that the article is out of scope
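For readers who want to reuse the annotation template, the fields above map naturally onto a small structured record. The following is a minimal sketch in Python, not the authors' actual tooling: the class names, field names, and the `is_capture` helper are illustrative assumptions derived from the template in Appendix A.2 and the five top-level taxonomy categories in Appendix A.1.

```python
from dataclasses import dataclass, field
from enum import Enum


class MechanismCategory(Enum):
    """Top-level categories from the taxonomy in Appendix A.1 (names assumed)."""
    NO_CAPTURE = 0  # fallback option when no mechanism applies
    DIRECT_INFLUENCE_ON_POLICY = 1
    CONFLICTING_INVOLVEMENT = 2
    MARKET_INFLUENCE = 3
    ELUSION_OF_LAW = 4
    EPISTEMIC_DISCOURSE_INFLUENCE = 5


@dataclass
class Annotation:
    """One record of the annotation template in Appendix A.2 (field names assumed)."""
    article_id: str
    title: str
    published: str                 # publication date, e.g. "2025-08-26"
    annotator: str
    mechanism_categories: list = field(default_factory=list)
    mechanism: str = ""            # free-text description of the mechanism
    evidence: list = field(default_factory=list)  # supporting excerpts
    narratives_used: list = field(default_factory=list)
    notes: str = ""
    duplicate_of: str = ""         # ID of the article this duplicates, if any
    consensus: list = field(default_factory=list)  # categories after pair discussion
    out_of_scope: bool = False

    def is_capture(self) -> bool:
        """True if at least one genuine capture mechanism was annotated."""
        return any(c is not MechanismCategory.NO_CAPTURE
                   for c in self.mechanism_categories)


# Example record (article ID and contents are invented for illustration)
example = Annotation(
    article_id="A001",
    title="Example article",
    published="2025-08-26",
    annotator="annotator-1",
    mechanism_categories=[MechanismCategory.DIRECT_INFLUENCE_ON_POLICY],
    mechanism="Record lobbying expenditure ahead of a new bill",
)
print(example.is_capture())  # -> True
```

Keeping *No capture* as an explicit enum member, rather than an empty list, mirrors the template's requirement that annotators actively select that option when no mechanism applies.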