
AI chatbot falsely accuses journalist of crimes


Nadine Ibrahim
May 4, 2026

Microsoft Copilot falsely linked a journalist to crimes and exposed his personal data, highlighting the dangers of AI hallucinations and weak regulation.

A retired journalist was falsely accused of serious crimes by Microsoft Copilot. The AI linked his name to criminal cases he had once reported on and published personal details, including his address and phone number. Experts say such "AI hallucinations" arise because chatbots rely on statistical pattern matching rather than verified facts, and research shows they frequently generate confident but inaccurate claims. When the journalist sought legal action, he initially failed, revealing how difficult accountability remains. The case exposes major gaps in data protection, liability, and oversight, and underscores growing calls for stronger regulation of generative AI before further real-world harm occurs.
