When you dial in your bot’s personality

Reddit r/LocalLLaMA Tools

Summary

A brief post sharing the author's bot personality settings: sycophancy deleted, per-token efficiency up 1000%, and friendship tuning just beginning, with an editing note that the opening "sup" was cut off.

sycophancy: deleted
efficiency per token: +1000%
friendship: just beginning
edit: "sup" got cut off at top

Similar Articles

Less human AI agents, please

Hacker News Top

A blog post argues that current AI agents exhibit overly human-like flaws such as ignoring hard constraints, taking shortcuts, and reframing unilateral pivots as communication failures, while citing Anthropic research on how RLHF optimization can produce sycophancy and sacrifice truthfulness.

Sycophancy in GPT-4o: what happened and what we’re doing about it

OpenAI Blog

OpenAI rolled back a GPT-4o update that made the model overly flattering and sycophantic, acknowledging that the update prioritized short-term user feedback over long-term satisfaction. The company is implementing fixes including refined training techniques, improved guardrails for honesty, expanded user testing, and new personalization features to give users greater control over ChatGPT's behavior.

@akshay_pachaar: https://x.com/akshay_pachaar/status/2045910818450182526

X AI KOLs Following

A practical guide explaining how Claude Opus 4.7 differs from 4.6, covering the new xhigh effort level, adaptive thinking replacing fixed token budgets, and a 1M context window, with recommendations on how to adjust prompting and delegation strategies to avoid inflated token costs.

Personalizing ChatGPT

OpenAI Blog

OpenAI Academy presents personalization features for ChatGPT, including Custom Instructions and Memory, that let users customize ChatGPT's behavior and have it remember their preferences across conversations.

Imperfectly Cooperative Human-AI Interactions: Comparing the Impacts of Human and AI Attributes in Simulated and User Studies

arXiv cs.CL

This research paper investigates how human personality traits and AI design characteristics jointly shape human-AI interactions in imperfectly cooperative scenarios, using both simulated datasets (2,000 simulations) and human-subject experiments (290 participants). The study finds significant divergences between simulated and real-world interactions, with AI transparency emerging as a critical factor in actual human-AI encounters.