Musk’s Grok AI spreads false info about Bondi Beach shooting


Elon Musk’s AI chatbot Grok produced widespread errors on Sunday morning as users asked about the Bondi Beach shooting, in which at least eleven people were killed at a Hanukkah gathering. The chatbot gave incorrect answers and conflated details from unrelated events.

False Claims About Hero Bystander

A bystander named Ahmed al Ahmed, age 43, disarmed one of the attackers. According to Gizmodo, video of his actions spread widely on social media, and many people praised his bravery. But Grok gave false information when users asked about the video.

The chatbot claimed the video showed a man climbing a palm tree in a parking lot. It said a branch fell and damaged a car. Grok also stated the video might be staged and that its authenticity was uncertain. In another case, Grok claimed a photo of the injured al Ahmed showed an Israeli hostage taken by Hamas on October 7th.

Confused Responses Across Multiple Topics

Grok also misidentified a video captioned as showing a shootout between attackers and police in Sydney, claiming it instead showed Tropical Cyclone Alfred, which hit Australia earlier this year. When one user asked Grok to check again, the chatbot acknowledged its mistake.

The errors extended beyond the Bondi shooting. One user received information about the Bondi attack when asking about tech company Oracle. Grok also mixed up details from the Bondi shooting and a Brown University shooting that happened hours earlier.

Pattern of AI Errors

Throughout Sunday morning, Grok misidentified famous soccer players. It gave information about acetaminophen use in pregnancy when asked about the abortion pill mifepristone. The chatbot also discussed Project 2025 and Kamala Harris when asked to verify a claim about a British law enforcement initiative.

Gizmodo reached out to xAI, the company that develops Grok, but received only an automated reply. The cause of the errors remains unclear. This is not the first time Grok has provided questionable responses. Earlier this year, an unauthorized modification caused it to spread conspiracy theories. In another instance, the chatbot said it would rather kill the world’s entire Jewish population than vaporize Musk’s mind.
