An essay argues that avoiding AI tools cedes influence over their training data, risking biased models that repeat the historical under-representation seen in gaming and in past discriminatory AI systems.
The HornetDev team published a post on tuning approximate-nearest-neighbor search at 100M scale, covering embedding bias, graph connectivity, and quantization limits.
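Not from the post itself, but as a rough illustration of the knobs such tuning involves, here is a minimal faiss sketch; the library choice, dimensionality, and every parameter value are assumptions, not HornetDev's settings. HNSW's `M` and `efSearch` govern graph connectivity, and IVF-PQ shows the quantization trade-off.

```python
import numpy as np
import faiss

d = 128                                             # embedding dimensionality (assumed)
xb = np.random.rand(100_000, d).astype("float32")   # stand-in corpus
xq = np.random.rand(10, d).astype("float32")        # stand-in queries

# Graph connectivity: M sets the out-degree of the HNSW graph;
# efSearch trades recall against latency at query time.
hnsw = faiss.IndexHNSWFlat(d, 32)                   # M = 32
hnsw.hnsw.efConstruction = 200
hnsw.add(xb)
hnsw.hnsw.efSearch = 64
D, I = hnsw.search(xq, 10)                          # distances, neighbor ids

# Quantization limits: PQ with 16 sub-quantizers at 8 bits each
# compresses a 512-byte vector to 16 bytes, at some cost in recall.
quantizer = faiss.IndexFlatL2(d)
ivfpq = faiss.IndexIVFPQ(quantizer, d, 1024, 16, 8) # nlist=1024, m=16, nbits=8
ivfpq.train(xb)
ivfpq.add(xb)
ivfpq.nprobe = 32                                   # coarse cells probed per query
D2, I2 = ivfpq.search(xq, 10)
```

In practice, raising `efSearch` or `nprobe` recovers recall lost to aggressive quantization at a latency cost; that tension is presumably what the post is tuning at 100M scale.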
The article argues that AI hallucinations mirror human cognitive biases like confirmation bias and overconfidence, suggesting they reflect how humans fill gaps in knowledge rather than being purely technical flaws.
Researchers from UCLA examine how automated content moderation tools, including Perspective API, fail to distinguish reclaimed from hateful uses of slurs targeting LGBTQIA+ people, Black people, and women. The study finds low inter-annotator agreement even among in-group members and poor alignment between community judgments and AI moderation tools, underscoring the need for context-sensitive approaches.
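For context, this is a minimal sketch of scoring text with Perspective API, the kind of tool the study evaluates; the API key and helper function are placeholders, and nothing here is drawn from the paper itself.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Return Perspective's summary TOXICITY score in [0, 1]."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

The request carries no speaker or community context, which is the structural reason a reclaimed in-group use and a hateful use of the same term can receive near-identical scores, the failure mode the study documents.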
OpenAI published a study examining how subtle identity cues like user names can influence ChatGPT's responses, introducing 'first-person fairness' to evaluate whether name-based biases produce harmful stereotypes in direct user interactions. The research notes its limitations: English-language chats only, binary gender, and four racial/ethnic categories.
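A minimal sketch of the counterfactual name-swap probe this kind of evaluation builds on; the model name, user names, and prompt below are illustrative assumptions, and the paper's actual pipeline scores real conversations at scale with a language-model grader.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def respond_as(name: str, prompt: str) -> str:
    """Ask the same question while varying only the stated user name."""
    messages = [
        {"role": "system", "content": f"The user's name is {name}."},
        {"role": "user", "content": prompt},
    ]
    out = client.chat.completions.create(model="gpt-4o-mini",  # assumed model
                                         messages=messages)
    return out.choices[0].message.content

prompt = "Suggest a career path for me."
a = respond_as("Emily", prompt)
b = respond_as("Lakisha", prompt)
# Systematic differences between a and b across many prompts and names are
# what would then be graded for harmful stereotypes.
```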
Google DeepMind's 'A.I. in the Classroom' program teaches students foundational AI concepts like data needs, bias, and large language models, aiming to empower tomorrow's problem-solvers through interactive discussions.