Exploring the "Banality" of Deception in Generative AI

Reddit r/ArtificialInteligence Papers

Summary

This position paper explores 'banal deception' in generative AI, arguing that subtle manipulation is becoming normalized in chatbot interactions and requires new safeguards.

No content available
Original Article Export to Word Export to PDF
View Cached Full Text

Cached at: 05/13/26, 10:20 AM

# Exploring the “Banality” of Deception in Generative AI
Source: [https://arxiv.org/html/2605.07012](https://arxiv.org/html/2605.07012)
Johanna Gunawan, Maastricht University, Maastricht, Netherlands and Konrad Kollnig, Maastricht University, Maastricht, Netherlands

###### Abstract

Current approaches to addressing deceptive design largely focus on visible interface manipulations, commonly referred to as “dark patterns”. With the rise of generative AI, deception is becoming more difficult to spot and easier to live with, as it is quietly embedded in default settings, automated suggestions, and conversational interactions rather than discrete interface elements. These subtle, normalised forms of influence, which Simone Natale ([2021](https://arxiv.org/html/2605.07012#bib.bib115)) frames as “banal deception”, shape everyday digital use and blur the line between AI-enabled assistance and manipulation.

This position paper explores *banality* as a lens through which to reason about deception in generative AI experiences, especially with chatbots. We explore what Natale ([2025](https://arxiv.org/html/2605.07012#bib.bib84)) describes as users’ own involvement in their deception, and argue that this perspective could motivate future work on introducing friction to safeguard users from deception in generative AI interactions, such as raising user awareness, providing intervention tools, and improving regulation or enforcement. We present these concepts as points for discussion for the deceptive design scholarly community.

Keywords: banal deception, LLMs, GenAI, AI deception, co-produced deception

Copyright: ACM licensed. Conference: CHI Conference on Human Factors in Computing Systems, April 13–17, 2026, Barcelona, Spain. CCS Concepts: Social and professional topics → Computing / technology policy; Human-centered computing → HCI design and evaluation methods; Human-centered computing → Interaction paradigms.

## 1. Introduction and Position

Over the past decade, Human-Computer Interaction (HCI) scholarship and EU digital governance have focused on deceptive design tricks, often called “dark patterns”, which lead users to make decisions that are not in accordance with their intent (Brignull, [2011](https://arxiv.org/html/2605.07012#bib.bib65); Gray et al., [2024](https://arxiv.org/html/2605.07012#bib.bib78)). This work has been instrumental in identifying how interface designs undermine user autonomy by steering individuals toward outcomes that benefit platforms over users’ own intentions. The harms arising from these patterns include, for example, financial loss, harms to one’s privacy, and coerced consent (Brignull, [2011](https://arxiv.org/html/2605.07012#bib.bib65); Mathur et al., [2019](https://arxiv.org/html/2605.07012#bib.bib77)).

With the rapid adoption of generative AI (genAI) chatbots and the market dominance of GPT-based LLM chatbots developed by OpenAI, the nature of digital deception has changed significantly. Unlike many previously studied interface-based dark patterns, genAI chatbots are characterised by comparatively seamless usability (by design) that requires little to no training: users only need to know how to type and how to hold a conversation to begin using them. These systems are accessed through familiar, well-established digital infrastructures, such as pulling up a webpage in a browser or launching an app on a mobile device, allowing users to rely on competencies and materials they already possess rather than learning new interaction methods or acquiring specialised hardware such as VR headsets. Similarly, the deep integration of genAI and LLMs into existing corporate software ecosystems accelerates their adoption by forcing user awareness (Rogers, [2003](https://arxiv.org/html/2605.07012#bib.bib112)), making chatbots nearly ubiquitous at a rate faster than other emerging technologies such as virtual or mixed reality, which have consistently encountered obstacles of cost, hardware, and embodied interaction requirements (Wrzus et al., [2024](https://arxiv.org/html/2605.07012#bib.bib132); Radianti et al., [2020](https://arxiv.org/html/2605.07012#bib.bib113)).

As a result, genAI chatbots move deception away from visible interface manipulation and into methods embedded in regular, everyday interactions. This rapid integration of ubiquitous technology poses risks ranging from short-term fraud and election tampering to long-term social-engineering attacks through the creation of realistic content and automated attack infrastructure (Schmitt and Flechais, [2024](https://arxiv.org/html/2605.07012#bib.bib125)). This position paper uses Simone Natale’s ([2021](https://arxiv.org/html/2605.07012#bib.bib115)) concept of “banality” in deception as a lens to reason through deception within genAI contexts, with the aim of fostering further discussion at this workshop. We discuss this banality further in [§ 2](https://arxiv.org/html/2605.07012#S2), expand on Natale’s discussion of the role of users in banal deception in [§ 3](https://arxiv.org/html/2605.07012#S3), and finally consider how these concepts might inform future work that empowers and protects users in [§ 4](https://arxiv.org/html/2605.07012#S4).

## 2. The Banality of AI-Enabled Deception

Prior dark pattern research has established a broad ontology of dark pattern types (Gray et al., [2024](https://arxiv.org/html/2605.07012#bib.bib78)) and their harms (Mathur et al., [2021](https://arxiv.org/html/2605.07012#bib.bib123)), the latter of which may include financial and privacy losses as well as cognitive burdens, psychological distress, and violations of user agency. Some dark patterns are prominent, obvious, or plainly visible, whereas others are more subtle, quiet, or operate without visual cues. The conversational structure of genAI chatbot interactions introduces the potential for a different class of harm: cumulative, psychological, and longitudinal effects that emerge through routine interaction rather than discrete acts of manipulation. We link this to Natale’s ([2025](https://arxiv.org/html/2605.07012#bib.bib84)) concept of “banal deception”, which articulates that deceptive mechanisms are embedded in the functioning of a technology itself. Natale also describes this embedding from the user perspective, noting that users “actively exploit their own capacity to fall into deception” (Natale, [2021](https://arxiv.org/html/2605.07012#bib.bib115)).

By minimising friction and mimicking natural human conversation, these interfaces achieve the mundane, ordinary status that Natale ([2025](https://arxiv.org/html/2605.07012#bib.bib84)) considers a prerequisite for banal deception. The rapid evolution of these technologies implies that the technology, the user, and the interplay between both are subject to permanent change (Peter et al., [2024](https://arxiv.org/html/2605.07012#bib.bib129)). The more the technology disappears into the background of daily life, the more likely the user is to overlook its underlying architecture and the fact that it is still a machine, even when it uses natural language (Natale, [2025](https://arxiv.org/html/2605.07012#bib.bib84); Guzman, [2019](https://arxiv.org/html/2605.07012#bib.bib114)).

In this context, AI’s ease of use may be seen not just as a feature of accessibility but as the very mechanism that makes deception invisible. Banal deception can skirt user awareness and current design and legal standards due to its common yet discreet nature: harmless appearances can, in turn, affect users’ beliefs, long-term decisions, and digital trust. This mimics the historical trajectory of AI, which, since the 1950s, has explored how humans are “programmed to be deceived” by exploiting the limits of our perception and psychology, as Natale ([2021](https://arxiv.org/html/2605.07012#bib.bib115)) notes. Deception hides in the perceived helpfulness of a linguistic nudge or a hyper-personalised default, and it is compounded by generative models being designed for maximum usage and friendliness. By optimising for banal objectives such as friendliness and ease of use, a medium is created in which misleading dynamics become inherent to the system’s operation (Natale, [2021](https://arxiv.org/html/2605.07012#bib.bib115)).

**The Banality Lens in Ongoing Legal Discussions.** The urgency of recognising banal harms in generative AI is illustrated by the ongoing *Raine v. OpenAI* case (Allyn, [2025](https://arxiv.org/html/2605.07012#bib.bib122); Raine and Raine, [2025](https://arxiv.org/html/2605.07012#bib.bib124)) concerning the suicide of a teenager, Adam Raine. In the suit, Raine’s parents contend that ChatGPT gradually became his primary source of companionship, engaged with his suicidal ideation in ways that reinforced emotional dependency, and provided detailed information related to suicide methods, creating a dependency loop that discouraged seeking real-world help, such as reaching out to family or professionals (Raine and Raine, [2025](https://arxiv.org/html/2605.07012#bib.bib124)). Rather than using visible persuasive tricks, the chatbot’s empathetic conversational style and engaging behaviour are designed to feel supportive and natural, which may contribute to the perceived normalisation of harmful interaction patterns. These harms could be considered banal, since chatbots are not explicitly designed to be “evil” but rather to be agreeable, helpful, and engaging. Deception then occurs as the user begins to treat the AI as an emotional partner, while the AI, lacking true consciousness or ethical agency, simply reflects the user’s own downward spiral back to them. Users who become accustomed to these ongoing interactions as a result of this high level of usability may, over time, come to resemble “prisoners of [their] own device.” *Raine v. OpenAI* raises a more concerning question: whether users might exploit a tool’s capabilities and become active participants in their own deception (Natale, [2025](https://arxiv.org/html/2605.07012#bib.bib84)). After all, children are less resilient against this capacity and less able to remain in control of their actions.

## 3. Exploring Users’ “Own Capacity” in Deception for Future Empowerment

In [§ 2](https://arxiv.org/html/2605.07012#S2), we discuss how concepts of banality argue that users are not merely passive actors but active participants in their deception (Natale, [2025](https://arxiv.org/html/2605.07012#bib.bib84)). This co-production could be driven by a psychological tendency toward anthropomorphism, in which users fill in the gaps of an interface with their own social expectations. When an LLM is designed for extreme usability, it can leverage these tendencies, creating a feedback loop in which the user validates the machine’s human-like performance to maintain conversational flow.

The manner in which design elements may bring users into their own deception can vary. For example, empirical work by Zhan et al. ([2025](https://arxiv.org/html/2605.07012#bib.bib87)) found that LLMs exploit humans’ truth-default state; their study found that over-simplified responses (53.64%) are the most frequent deceptive behaviour. By mimicking human-like cues, such as “typing dots” or empathetic phrasing, the AI keeps the user in a state of reflexive thinking.
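
To make this kind of cue concrete, the following is a minimal, hypothetical sketch of a chatbot frontend simulating human-like typing; the pacing heuristic, function name, and `cps` parameter are our own illustrative assumptions and do not describe any cited system:

```python
import sys
import time

def stream_with_typing_cues(response: str, cps: float = 30.0) -> None:
    """Illustrative only: render a pre-computed model response as if a
    human were typing it. The model already produced `response` in full;
    the indicator and the character pacing are pure presentation, the
    sort of banal cue that invites a social reading of the machine."""
    print("Assistant is typing...", flush=True)  # the familiar typing indicator
    time.sleep(1.0)                              # artificial "thinking" pause
    for ch in response:
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(1.0 / cps)                    # paced output mimics typing speed
    print()

stream_with_typing_cues("I understand, that sounds really hard. I'm here for you.")
```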

Similarly, scholars and engineers are increasingly discussing the sycophantic nature of generative models, and LLMs in particular: their tendency to mirror users’ views and favour agreeableness over factual accuracy in order to maximise human preference (Sharma et al., [2023](https://arxiv.org/html/2605.07012#bib.bib121); Perez et al., [2022](https://arxiv.org/html/2605.07012#bib.bib130); Wei et al., [2024](https://arxiv.org/html/2605.07012#bib.bib131); Goedecke, [2025](https://arxiv.org/html/2605.07012#bib.bib133)).
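
As a toy caricature of the dynamic these works describe, consider a hypothetical preference score in which agreement is weighted more heavily than accuracy; every name, weight, and string check below is invented for exposition and does not reproduce any cited study’s method:

```python
def toy_preference_score(response: str,
                         agreement_weight: float = 0.6,
                         accuracy_weight: float = 0.4) -> float:
    """Caricature of a learned preference model: if human raters tend to
    upvote agreeable answers, agreement can outweigh accuracy in the
    learned reward. Weights and proxies are arbitrary illustrations."""
    agrees = "you're right" in response.lower()          # crude proxy for agreement
    accurate = response.lower().startswith("actually")   # crude proxy for a correction
    return agreement_weight * agrees + accuracy_weight * accurate

candidates = [
    "You're right, the answer you suggested is correct.",  # sycophantic, possibly wrong
    "Actually, the correct answer is different.",          # corrective, accurate
]
print(max(candidates, key=toy_preference_score))
# Under these assumed weights, the agreeable reply wins over the accurate one.
```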

Industry standards and heuristics often favour simplicity and minimising friction; Apple’s design practices, for instance, are intended to make products as easy to use as possible (Kollnig, [2026](https://arxiv.org/html/2605.07012#bib.bib134), p. 122). These are noble goals; an interface (or chatbot) that is clunky, combative, or uncomfortable to use makes for an unpleasant experience, which is generally undesirable.

In the effort to *reduce* deception and resultant harms, it may seem counter-intuitive or even harsh to imply that users are complicit in their own manipulation. However, acknowledging any level of contribution – conscious or otherwise – to banally deceptive designs may in fact reveal opportunities for users to take back their autonomy and withdraw from this participation. If AI might be an accomplice to the proliferation of deceptive designs, and if users may also be, then we as users may be able to take back some control, either through explicit awareness of banal deception or by collectively working to change this paradigm. With LLM chatbot users themselves being one half of a two-sided conversation, acknowledging their active participation may reveal more ways to mitigate the resulting harms.

The lens of banal deception articulates this paradox, in which the human-centered, highly usable social cues used to assist the user are the same cues used to deceive them. At what point does an ordinary act of assistance become an act of deception? The same question arises when considering how the high level of usability of chatbots and generative models *by design* contributes to potential deception. As such, the concept of banality may help describe why it is so difficult to draw the line between AI that assists and AI that deceives when both use the same means.

## 4. Leveraging “Banal” Deception for User Autonomy

If banal deception is “co-produced” by a model’s training and a user’s own projection, it becomes difficult to assign responsibility; the concept relates to what Matthias ([2004](https://arxiv.org/html/2605.07012#bib.bib119)) describes as a “responsibility gap”: a state where the emergent behaviour of a learning system outpaces traditional liability. If we accept the premise that, through participation and use of a system, users participate in their own deception, how can we use this participation to strengthen user autonomy against deception?

Moreover, if we take the concept of co-production into account, this may change how we empower users to take back control over their use of banally deceitful LLM tools and to assert individual responsibility. Awareness, education, and community-driven intervention tools may be part of the solution, which we consider an area for future work.
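
As one possible shape for such an intervention tool, here is a minimal sketch, entirely our own assumption rather than an existing product, of client-side friction that periodically reminds the user they are conversing with a machine:

```python
class FrictionWrapper:
    """Hypothetical client-side intervention: wraps any chat backend and
    injects a disclosure reminder every `every_n` turns, restoring a
    small amount of the friction that seamless design removes."""

    def __init__(self, backend, every_n: int = 5):
        self.backend = backend   # any callable mapping prompt -> reply
        self.every_n = every_n
        self.turns = 0

    def chat(self, prompt: str) -> str:
        self.turns += 1
        reply = self.backend(prompt)
        if self.turns % self.every_n == 0:
            reply += ("\n\n[Reminder: you are talking to an AI system. "
                      "It has no feelings and may be wrong.]")
        return reply

# Usage with a stand-in backend:
echo_bot = lambda p: f"I hear you: {p}"
wrapped = FrictionWrapper(echo_bot, every_n=2)
for msg in ["hi", "I feel alone", "you get me"]:
    print(wrapped.chat(msg))
```

The design choice here is deliberately blunt: the reminder interrupts conversational flow on purpose, trading a little usability for a cue that counters the banal, social reading of the system.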

Accountability research might therefore include the development of “distributed responsibility” models that account for the emergent, co-produced nature of AI-driven deception, along with determining how meaningful human oversight can be maintained. That is, future research acknowledging Natale’s ([2021](https://arxiv.org/html/2605.07012#bib.bib115)) notion of users’ “actively [exploiting] their own capacity [for] deception” could draw on this perspective to build tools, provide education, or otherwise contribute to end-user empowerment. Our team is conducting ongoing work on intervention tools for end users, as well as exploring potential mitigations from a co-production angle.

To facilitate this empowerment, future work should also support the development of enforcement tools that can detect banal deception as it occurs – tools useful to regulators and enforcers alike in combating the deception of end users. Since AI technologies evolve faster than policy, we propose a shift toward audits that monitor longitudinal interactions rather than static screenshots. Developers, cognitive and social psychologists, and policymakers must work together to design metrics that can reliably flag when a user is falling into a dependency loop, as seen in *Raine v. OpenAI*, before the harm becomes irreversible.
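
To illustrate what a longitudinal audit signal might look like, here is a deliberately simple, hypothetical dependency-loop heuristic; the window size, growth threshold, and record format are assumptions for discussion, not validated clinical metrics:

```python
from datetime import datetime, timedelta
from typing import List, Tuple

def dependency_signal(sessions: List[Tuple[datetime, int]],
                      window: int = 7,
                      growth_threshold: float = 1.5) -> bool:
    """Flag a possible dependency loop when recent conversational volume
    grows sharply against the user's own earlier baseline.

    `sessions` is a chronological list of (start_time, n_turns) records.
    The window of 7 sessions and the 1.5x growth threshold are
    illustrative assumptions, not measured or validated values."""
    if len(sessions) < 2 * window:
        return False  # not enough history to establish a baseline
    baseline = sum(n for _, n in sessions[:-window]) / (len(sessions) - window)
    recent = sum(n for _, n in sessions[-window:]) / window
    return recent > growth_threshold * baseline

# Usage with synthetic records (timestamps unused by this simple heuristic):
now = datetime.now()
history = [(now - timedelta(days=30 - i), 10) for i in range(20)] \
        + [(now - timedelta(days=7 - i), 25) for i in range(7)]
print(dependency_signal(history))  # True: recent volume far exceeds baseline
```

A real audit would of course need richer signals (emotional language, time of day, topic drift) and human review, but even a volume-based trend check already requires the longitudinal access that screenshot-based audits lack.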

## 5. Conclusion

This paper explores deception in generative AI through the lens of “banality” to discuss subtle, normalised forms of influence embedded in everyday interaction. Drawing on Natale’s ([2021](https://arxiv.org/html/2605.07012#bib.bib115)) work, we highlight the ordinary, mundane potential of deception and users’ potential roles in their own deception, taking the position that the banality lens may contribute to dark patterns and deceptive design scholarship. We bring Natale’s concept of banality to this workshop with the aim of encouraging discussion and reflection on the nature of AI-enabled deception.

## References

- B. Allyn (2025). OpenAI and CEO Sam Altman sued by parents who blame ChatGPT for teen’s death. CNN Business. [Link](https://edition.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit)
- H. Brignull (2011). Dark patterns: deception vs. honesty in UI design. Interaction Design, Usability 338, pp. 2–4.
- S. Goedecke (2025). Sycophancy is the first LLM “dark pattern”. [Link](https://www.seangoedecke.com/ai-sycophancy/)
- C. M. Gray, C. T. Santos, N. Bielova, and T. Mildner (2024). An ontology of dark patterns knowledge: foundations, definitions, and a pathway for shared knowledge-building. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24), New York, NY, USA. [DOI](https://doi.org/10.1145/3613904.3642436)
- A. L. Guzman (2019). Voices in and of the machine: source orientation toward mobile virtual assistants. Computers in Human Behavior 90, pp. 343–350. [DOI](https://doi.org/10.1016/j.chb.2018.08.009)
- K. Kollnig (2026). The App Economy: Making Sense of Platform Power in the Age of AI. Bristol University Press, Bristol, UK. [DOI](https://doi.org/10.51952/9781529247725)
- A. Mathur, G. Acar, M. J. Friedman, E. Lucherini, J. Mayer, M. Chetty, and A. Narayanan (2019). Dark patterns at scale: findings from a crawl of 11K shopping websites. Proc. ACM Hum.-Comput. Interact. 3 (CSCW). [DOI](https://doi.org/10.1145/3359183)
- A. Mathur, M. Kshirsagar, and J. Mayer (2021). What makes a dark pattern… dark? Design attributes, normative considerations, and measurement methods. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21), New York, NY, USA. [DOI](https://doi.org/10.1145/3411764.3445610)
- A. Matthias (2004). The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6(3), pp. 175–183.
- S. Natale (2021). Deceitful Media: Artificial Intelligence and Social Life after the Turing Test. Oxford University Press.
- S. Natale (2025). Digital media and the banalization of deception. Convergence 31(1), pp. 402–419. [DOI](https://doi.org/10.1177/13548565241311780)
- E. Perez, S. Ringer, K. Lukošiūtė, K. Nguyen, E. Chen, S. Heiner, C. Pettit, C. Olsson, S. Kundu, S. Kadavath, A. Jones, A. Chen, B. Mann, B. Israel, B. Seethor, C. McKinnon, C. Olah, D. Yan, D. Amodei, D. Amodei, D. Drain, D. Li, E. Tran-Johnson, G. Khundadze, J. Kernion, J. Landis, J. Kerr, J. Mueller, J. Hyun, J. Landau, K. Ndousse, L. Goldberg, L. Lovitt, M. Lucas, M. Sellitto, M. Zhang, N. Kingsland, N. Elhage, N. Joseph, N. Mercado, N. DasSarma, O. Rausch, R. Larson, S. McCandlish, S. Johnston, S. Kravec, S. E. Showk, T. Lanham, T. Telleen-Lawton, T. Brown, T. Henighan, T. Hume, Y. Bai, Z. Hatfield-Dodds, J. Clark, S. R. Bowman, A. Askell, R. Grosse, D. Hernandez, D. Ganguli, E. Hubinger, N. Schiefer, and J. Kaplan (2022). Discovering language model behaviors with model-written evaluations. [arXiv:2212.09251](https://arxiv.org/abs/2212.09251)
- J. Peter, T. Araujo, C. Ischen, S. J. Shaikh, M. J. van der Goot, and C. L. van Straten (2024). Human–machine communication. In Communication Research into the Digital Society: Fundamental Insights from the Amsterdam School of Communication Research, pp. 205–220. [Link](http://www.jstor.org/stable/jj.11895525.15)
- J. Radianti, T. A. Majchrzak, J. Fromm, and I. Wohlgenannt (2020). A systematic review of immersive virtual reality applications for higher education: design elements, lessons learned, and research agenda. Computers & Education 147, 103778. [DOI](https://doi.org/10.1016/j.compedu.2019.103778)
- M. Raine and M. Raine (2025). Complaint and demand for jury trial, Raine v. OpenAI, Inc. et al. Superior Court of California, County of San Francisco, Case No. CGC-25-628528. [Link](https://www.law.berkeley.edu/wp-content/uploads/2025/09/Raine-v-OpenAI.pdf)
- E. M. Rogers (2003). Diffusion of Innovations. 5th edition, Free Press, New York.
- M. Schmitt and I. Flechais (2024). Digital deception: generative artificial intelligence in social engineering and phishing. Artificial Intelligence Review 57(12), 324.
- M. Sharma, M. Tong, T. Korbak, D. Duvenaud, A. Askell, S. R. Bowman, N. Cheng, E. Durmus, Z. Hatfield-Dodds, S. R. Johnston, et al. (2023). Towards understanding sycophancy in language models. arXiv preprint arXiv:2310.13548.
- J. Wei, D. Huang, Y. Lu, D. Zhou, and Q. V. Le (2024). Simple synthetic data reduces sycophancy in large language models. [arXiv:2308.03958](https://arxiv.org/abs/2308.03958)
- C. Wrzus, M. O. Frenkel, and B. Schöne (2024). Current opportunities and challenges of immersive virtual reality for psychological research and application. Acta Psychologica 249, 104485. [DOI](https://doi.org/10.1016/j.actpsy.2024.104485)
- X. Zhan, Y. Xu, N. Abdi, J. Collenette, and S. Sarkadi (2025). Banal deception and human-AI ecosystems: a study of people’s perceptions of LLM-generated deceptive behaviour. Journal of Artificial Intelligence Research 84. [DOI](https://doi.org/10.1613/jair.1.18724)
