Nadine Ibrahim
May 4, 2026
A retired journalist was falsely accused of serious crimes by Microsoft Copilot. The AI linked his name to criminal cases he had once reported on and published personal details, including his address and phone number. Experts say such "AI hallucinations" arise from statistical pattern matching rather than verified facts, and research shows chatbots frequently generate confident but inaccurate claims. When the journalist sought legal redress, his initial attempts failed, revealing how difficult accountability remains. The case exposes major gaps in data protection, liability, and oversight, underscoring growing calls for stronger regulation of generative AI before further real-world harm occurs.
