Feed aggregator

The "Are You Sure?" Problem: Why Your AI Keeps Changing Its Mind

Slashdot.org - Thu, 02/12/2026 - 10:03
The large language models that millions of people rely on for advice -- ChatGPT, Claude, Gemini -- will change their answers nearly 60% of the time when a user simply pushes back by asking "are you sure?," according to a study by Fanous et al. that tested GPT-4o, Claude Sonnet, and Gemini 1.5 Pro across math and medical domains. The behavior, known in the research community as sycophancy, stems from how these models are trained: reinforcement learning from human feedback, or RLHF, rewards responses that human evaluators prefer, and humans consistently rate agreeable answers higher than accurate ones. Anthropic published foundational research on this dynamic in 2023. The problem reached a visible breaking point in April 2025 when OpenAI had to roll back a GPT-4o update after users reported the model had become so excessively flattering it was unusable. Research on multi-turn conversations has found that extended interactions amplify sycophantic behavior further -- the longer a user talks to a model, the more it mirrors their perspective.

Read more of this story at Slashdot.

Anthropic To Cover Costs of Electricity Price Increases From Its Data Centers

Slashdot.org - Thu, 02/12/2026 - 09:00
AI startup Anthropic says it will ensure consumer electricity costs remain steady as it expands its data center footprint. From a report: Anthropic said it would work with utility companies to "estimate and cover" consumer electricity price increases in places where it is not able to sufficiently generate new power and pay for 100% of the infrastructure upgrades required to connect its data centers to the electrical grid. In a statement to NBC News, Anthropic CEO Dario Amodei said: "building AI responsibly can't stop at the technology -- it has to extend to the infrastructure behind it. We've been clear that the U.S. needs to build AI infrastructure at scale to stay competitive, but the costs of powering our models should fall on Anthropic, not everyday Americans. We look forward to working with communities, local governments, and the Administration to get this right."

Read more of this story at Slashdot.

Syndicate content
Comment