i got banned for asking help about AI stealing my photos... because my english is not good?

Reddit r/ArtificialInteligence News

Summary

A photographer expresses frustration over being banned from online communities for seeking help against unauthorized AI training of their work, attributing the ban to language barriers and biased moderation.

look i'm a professional photographer from Greece and i'm really angry right now. i found out my photos are being used to train AI models without anyone asking me. so i go to some forums to ask what i can do legal and how to protect my work. and what happens? i get deleted or banned. they tell me i sound like a bot. why? because i use tools to help me write better english because it's not my first language.

so if you are not from UK or USA you dont have a voice here? is this digital racism or what? AI steals my light and my work, and when i use AI just to speak to you and find justice, you kick me out. this is crazy. 80% of the world doesn't speak perfect english, so we just stay silent while big tech takes everything?

anyway i just want to know if any other photographer here had the same problem with platforms banning him because he tried to fight for his copyright. sorry for my bad english i'm just tired of this.
Similar Articles

I'm Sick of AI Everything

Hacker News Top

Hacker News discussion where users express frustration with AI saturation and compare it to social-media burnout.

English Centric AI Is Merging Unrelated Communities and Distorting Identities

Reddit r/artificial

The article critiques how AI systems, particularly Grokipedia and AI search, perpetuate errors by merging unrelated communities due to English-centric transliteration and biased training data. It highlights the systemic issue of erasing cultural distinctions through simplified English representations and repeated misinformation.

IYKYK (But AI Doesn't): Automated Content Moderation Does Not Capture Communities' Heterogeneous Attitudes Towards Reclaimed Language

arXiv cs.CL

Researchers from UCLA examine how automated content moderation tools, including Perspective API, fail to distinguish between reclaimed and hateful uses of slurs for LGBTQIA+, Black, and women communities. The study finds low inter-annotator agreement even among in-group members and poor alignment between community judgments and AI moderation tools, highlighting the need for context-sensitive approaches.