After an ICE agent shot and killed Renee Good in Minneapolis, social media users turned to AI to identify the officer. The agent wore a mask in eyewitness videos, but posts soon showed what appeared to be an unmasked face. Users had fed video stills to xAI’s chatbot Grok and asked it to unmask the agent. The AI-generated image then spread across social platforms along with a name.
Experts Warn Against AI Unmasking
According to NPR, experts say using AI to unmask people is unreliable and dangerous. Hany Farid, a University of California, Berkeley professor who studies digital images, explained the problem: AI enhancement tends to hallucinate facial details. The result may look clear, but it has no connection to reality and cannot be used for identification.
The false image created real harm for innocent people. The posts included the name Steve Grove, though the source of that name remains unclear. By Thursday morning, at least two men named Steve Grove faced online attacks despite having no link to the shooting.
Wrong People Targeted
One victim was Steven Grove, who owns a gun shop in Springfield, Missouri. He woke up to find his Facebook page under attack. He told the Springfield Daily Citizen that he never goes by Steve. He also pointed out he lives in Missouri, does not work for ICE, and has 20 inches of hair.
News Outlet Publisher Also Harassed
The second person was Steve Grove, publisher of the Minnesota Star Tribune. The newspaper released a statement about what it called a coordinated online disinformation campaign. The paper said the ICE agent has no known connection to the Star Tribune. It urged people to seek facts from trained journalists, not bots.
Meanwhile, the Star Tribune and NPR identified the actual ICE agent as Jonathan Ross. Court documents show Ross was dragged by a car during a different traffic stop in Bloomington, Minnesota, in June of last year. The episode shows how quickly AI tools can spread false information that leads to real-world harassment of innocent people.