Tag: #fairness (Cards List)

Do Fair Models Reason Fairly? Counterfactual Explanation Consistency for Procedural Fairness in Credit Decisions

arXiv cs.LG · 5h ago

This paper introduces Counterfactual Explanation Consistency (CEC), a framework to detect and mitigate hidden procedural bias in outcome-fair models by aligning feature attributions between individuals and their counterfactual counterparts, with experiments on credit and income datasets.
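The core CEC idea, checking whether a model's feature attributions for an individual align with those for their counterfactual counterpart, can be sketched as a similarity check over attribution vectors. This is a generic illustration using an assumed cosine measure, not the paper's exact metric:

```python
import numpy as np

def attribution_consistency(attr_original, attr_counterfactual):
    """Cosine similarity between two feature-attribution vectors.

    High values suggest the model 'reasons' similarly about an
    individual and their counterfactual counterpart; low values
    flag a potential procedural inconsistency even when outcomes
    look fair.
    """
    a = np.asarray(attr_original, dtype=float)
    b = np.asarray(attr_counterfactual, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Identical attributions are perfectly consistent
print(attribution_consistency([1.0, 0.0], [1.0, 0.0]))  # → 1.0
```

In practice the attribution vectors would come from an explainer such as SHAP or integrated gradients, applied to both the original instance and its counterfactual.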

How Does Differential Privacy Affect Social Bias in LLMs? A Systematic Evaluation

arXiv cs.CL · yesterday

This paper presents a systematic evaluation of how differential privacy impacts social bias in large language models, finding that while it reduces bias in sentence scoring, the effect does not generalize across all tasks.

FairHealth: An Open-Source Python Library for Trustworthy Healthcare AI in Low-Resource Settings

arXiv cs.LG · 2d ago

FairHealth is an open-source Python library designed for trustworthy healthcare AI in low-resource settings, offering modules for fairness auditing, privacy-preserving federated learning, and explainability.

Weight Pruning Amplifies Bias: A Multi-Method Study of Compressed LLMs for Edge AI

arXiv cs.LG · 2d ago

This study reveals a 'Smart Pruning Paradox': activation-aware pruning methods such as Wanda preserve perplexity but significantly amplify social bias in large language models deployed on edge devices.
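Wanda's pruning criterion scores each weight by its magnitude times the L2 norm of the corresponding input activation, then drops the lowest-scoring weights per output row. A minimal sketch of that scoring rule (not the released Wanda implementation) shows how a large weight on a rarely-activated feature can still be pruned:

```python
import numpy as np

def wanda_mask(W, X, sparsity=0.5):
    """Activation-aware pruning mask in the style of Wanda:
    score_ij = |W_ij| * ||X_j||_2, pruning the lowest-scoring
    weights within each output row. True means keep the weight.
    """
    act_norm = np.linalg.norm(X, axis=0)        # per-input-feature L2 norm
    scores = np.abs(W) * act_norm               # broadcast over output rows
    k = int(W.shape[1] * sparsity)              # weights to prune per row
    mask = np.ones_like(W, dtype=bool)
    if k:
        idx = np.argsort(scores, axis=1)[:, :k] # lowest-scoring columns
        np.put_along_axis(mask, idx, False, axis=1)
    return mask

W = np.array([[1.0, 0.1, 2.0, 0.05]])           # one output neuron
X = np.array([[1.0, 10.0, 0.1, 1.0]])           # calibration activations
print(wanda_mask(W, X, sparsity=0.5))           # keeps cols 0-1, prunes 2-3
```

Note that the largest weight (2.0) is pruned because its input feature barely activates, while the small weight 0.1 survives on the strength of its activation norm; which inputs those low-activation features correspond to is exactly where bias effects can hide.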

Multi-Objective Multi-Agent Bandits: From Learning Efficiency to Fairness Optimization

arXiv cs.LG · 3d ago

This paper introduces Pareto UCB1 Gossip and Simulated NSW UCB Gossip for multi-objective multi-agent multi-armed bandits, addressing both learning efficiency and fairness in stochastic environments.
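The Nash social welfare (NSW) angle can be illustrated with a generic scalarized UCB rule: compute a per-objective UCB1 index for each arm and combine the indices by product, which rewards arms that do reasonably well on every objective over arms that excel on one. This is a sketch of the general idea with assumed names, not the paper's gossip algorithms:

```python
import math

def nsw_ucb_pick(counts, means, t):
    """Pick an arm by NSW scalarization of per-objective UCB1 indices:
    score(a) = prod_k (mean[a][k] + bonus(a)), bonus = sqrt(2 ln t / n_a).

    counts[a] is how often arm a was played; means[a] is its list of
    empirical means, one per objective; t is the current round.
    """
    best, best_score = 0, -1.0
    for a, (n, mu) in enumerate(zip(counts, means)):
        if n == 0:
            return a                       # play each arm once first
        bonus = math.sqrt(2 * math.log(t) / n)
        score = 1.0
        for m in mu:
            score *= m + bonus
        if score > best_score:
            best, best_score = a, score
    return best

# A balanced arm (0.5, 0.5) beats a lopsided one (0.9, 0.1) under NSW
print(nsw_ucb_pick([5, 5], [[0.9, 0.1], [0.5, 0.5]], t=10))  # → 1
```

The product form is what encodes fairness across objectives: driving any single objective toward zero collapses the whole score, so the rule avoids sacrificing one objective entirely for another.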

Beyond Single Ground Truth: Reference Monism as Epistemic Injustice in ASR Evaluation

arXiv cs.CL · 3d ago

This paper critiques the use of single-reference ground truth in ASR evaluation, arguing it causes epistemic injustice for speakers with aphasia. It proposes a new metric, Epistemic Injustice Distance, and advocates for WER-Range to account for diverse transcription conventions.
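One natural reading of WER-Range is to score a hypothesis against several acceptable reference transcriptions and report the spread rather than a single number. The sketch below uses the standard word-level Levenshtein definition of WER; the range construction is an illustration of the idea, not the paper's exact formulation:

```python
def wer(ref, hyp):
    """Word error rate: word-level Levenshtein distance over reference length."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))          # one-row dynamic programming table
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            prev, d[j] = d[j], min(d[j] + 1,          # deletion
                                   d[j - 1] + 1,      # insertion
                                   prev + (rw != hw)) # substitution / match
    return d[-1] / max(len(r), 1)

def wer_range(references, hyp):
    """Min and max WER over a set of acceptable reference transcriptions."""
    scores = [wer(ref, hyp) for ref in references]
    return min(scores), max(scores)

refs = ["i want to go home", "i wanna go home"]
print(wer_range(refs, "i wanna go home"))  # → (0.0, 0.4)
```

A hypothesis that matches one legitimate transcription convention but not another gets a range of (0.0, 0.4) here, making visible exactly the reference-choice sensitivity that a single ground truth hides.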

Disparities In Negation Understanding Across Languages In Vision-Language Models

arXiv cs.CL · 2026-04-22

MIT researchers release the first multilingual negation benchmark covering seven languages and show VLMs like CLIP struggle with non-Latin scripts, while MultiCLIP and SpaceVLM offer uneven improvements across languages.

DART: Mitigating Harm Drift in Difference-Aware LLMs via Distill-Audit-Repair Training

arXiv cs.CL · 2026-04-21

DART (Distill-Audit-Repair Training) is a new training framework that addresses 'harm drift' in safety-aligned LLMs, where fine-tuning for demographic difference-awareness causes harmful content to appear in model explanations. On eight benchmarks, DART improves Llama-3-8B-Instruct accuracy from 39.0% to 68.8% while reducing harm drift cases by 72.6%.

Show HN: Mediator.ai – Using Nash bargaining and LLMs to systematize fairness

Hacker News Top · 2026-04-20

Mediator.ai is a tool that applies Nash bargaining game theory and LLMs to facilitate fair cooperative negotiation, generating and scoring candidate agreements against both parties' stated needs until an optimal solution is found.
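The Nash bargaining objective behind this kind of scoring is standard: among candidate agreements, prefer the one maximizing the product of each party's utility gain over their walk-away (disagreement) payoff. A minimal sketch of that objective, with hypothetical candidate names, not Mediator.ai's code:

```python
def nash_score(utilities, disagreement):
    """Nash bargaining objective: product of each party's utility gain
    over their disagreement (walk-away) payoff. Candidates that leave
    any party at or below their fallback score 0.
    """
    gains = [u - d for u, d in zip(utilities, disagreement)]
    if any(g <= 0 for g in gains):
        return 0.0
    prod = 1.0
    for g in gains:
        prod *= g
    return prod

candidates = {            # candidate agreement -> (party A utility, party B utility)
    "split 50/50": (6.0, 6.0),
    "split 70/30": (8.0, 3.0),
}
fallback = (2.0, 2.0)     # each party's walk-away payoff
best = max(candidates, key=lambda c: nash_score(candidates[c], fallback))
print(best)  # → split 50/50  (Nash product 16 vs 6)
```

The product criterion is what "systematizes fairness" here: lopsided agreements score poorly even when their total utility is higher, so the search is pulled toward mutually beneficial outcomes.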

Evaluating the ethics of autonomous systems

MIT News — Artificial Intelligence · 2026-04-02

MIT researchers introduce SEED-SET, a framework using LLMs to proactively evaluate the ethical alignment of autonomous systems in high-stakes scenarios like power distribution, addressing gaps in static testing methods.
