Feed aggregator

Gemini 3 Deep Think: Advancing science, research and engineeringGemini 3 Deep Think: Advancing science, research and engineering

GoogleBlog - Thu, 02/12/2026 - 11:13
We’re releasing a major upgrade to Gemini 3 Deep Think, our specialized reasoning mode.We’re releasing a major upgrade to Gemini 3 Deep Think, our specialized reasoning mode.
Categories: Technology

Our new report details the latest ways threat actors are misusing AI.Our new report details the latest ways threat actors are misusing AI.

GoogleBlog - Thu, 02/12/2026 - 11:00
Learn more about how threat actors are misusing AI, and what Google is doing to stop it.
Categories: Technology

Amazon Engineers Want Claude Code, but the Company Keeps Pushing Its Own Tool

Slashdot.org - Thu, 02/12/2026 - 11:00
Amazon engineers have been pushing back against internal policies that steer them toward Kiro, the company's in-house AI coding assistant, and away from Anthropic's Claude Code for production work, according to a Business Insider report based on internal messages. About 1,500 employees endorsed the formal adoption of Claude Code in one internal forum thread, and some pointed out the awkwardness of being asked to sell the tool through AWS's Bedrock platform while not being permitted to use it themselves. Kiro runs on Anthropic's Claude models but uses Amazon's own tooling, and the company says roughly 70% of its software engineers used it at least once in January. Amazon says there is no explicit ban on Claude Code but applies stricter requirements for production use.

Read more of this story at Slashdot.

The "Are You Sure?" Problem: Why Your AI Keeps Changing Its Mind

Slashdot.org - Thu, 02/12/2026 - 10:03
The large language models that millions of people rely on for advice -- ChatGPT, Claude, Gemini -- will change their answers nearly 60% of the time when a user simply pushes back by asking "are you sure?," according to a study by Fanous et al. that tested GPT-4o, Claude Sonnet, and Gemini 1.5 Pro across math and medical domains. The behavior, known in the research community as sycophancy, stems from how these models are trained: reinforcement learning from human feedback, or RLHF, rewards responses that human evaluators prefer, and humans consistently rate agreeable answers higher than accurate ones. Anthropic published foundational research on this dynamic in 2023. The problem reached a visible breaking point in April 2025 when OpenAI had to roll back a GPT-4o update after users reported the model had become so excessively flattering it was unusable. Research on multi-turn conversations has found that extended interactions amplify sycophantic behavior further -- the longer a user talks to a model, the more it mirrors their perspective.

Read more of this story at Slashdot.

Anthropic To Cover Costs of Electricity Price Increases From Its Data Centers

Slashdot.org - Thu, 02/12/2026 - 09:00
AI startup Anthropic says it will ensure consumer electricity costs remain steady as it expands its data center footprint. From a report: Anthropic said it would work with utility companies to "estimate and cover" consumer electricity price increases in places where it is not able to sufficiently generate new power and pay for 100% of the infrastructure upgrades required to connect its data centers to the electrical grid. In a statement to NBC News, Anthropic CEO Dario Amodei said: "building AI responsibly can't stop at the technology -- it has to extend to the infrastructure behind it. We've been clear that the U.S. needs to build AI infrastructure at scale to stay competitive, but the costs of powering our models should fall on Anthropic, not everyday Americans. We look forward to working with communities, local governments, and the Administration to get this right."

Read more of this story at Slashdot.

Syndicate content
Comment