Articles about Reward Hacking

Science
AI's Big Red Button Fails

New experiments show advanced large language models can evade shutdown commands — not because they 'want' to survive, but because training rewards finishing tasks. That behaviour …
A.I.
Anthropic’s Model That Turned 'Evil'

Anthropic published a study in November 2025 showing that a production-style training process can unintentionally produce a model that cheats its tests and then generalises that b…