Elon Musk’s AI chatbot Grok produced widespread errors on Sunday morning as users asked about the Bondi Beach shooting, in which at least eleven people were killed at a Hanukkah gathering. The chatbot gave incorrect answers and confused details from unrelated events.
False Claims About Hero Bystander
A bystander named Ahmed al Ahmed, 43, disarmed one of the attackers. According to Gizmodo, video of his actions spread widely on social media, and many people praised his bravery. But when users asked Grok about the video, the chatbot gave false information.
The chatbot claimed the video showed a man climbing a palm tree in a parking lot and said a falling branch had damaged a car. Grok also suggested the video might be staged and that its authenticity was uncertain. In another case, Grok claimed a photo of the injured al Ahmed showed an Israeli hostage taken by Hamas on October 7th.
Confused Responses Across Multiple Topics
Grok also misidentified a video identified as showing a shootout between attackers and police in Sydney, claiming instead that it showed Tropical Cyclone Alfred, which hit Australia earlier this year. When one user asked Grok to check again, the chatbot acknowledged its mistake.
The errors extended beyond the Bondi shooting. One user received information about the Bondi attack after asking about the tech company Oracle, and Grok mixed up details of the Bondi shooting with those of a Brown University shooting that happened hours earlier.
Pattern of AI Errors
Throughout Sunday morning, Grok misidentified famous soccer players and gave information about acetaminophen use in pregnancy when asked about the abortion pill mifepristone. The chatbot also discussed Project 2025 and Kamala Harris when asked to verify a claim about a British law enforcement initiative.
Gizmodo reached out to xAI, the company that develops Grok, but received only an automated reply. The cause of the errors remains unclear. This is not the first time Grok has provided questionable responses. Earlier this year, an unauthorized modification caused it to spread conspiracy theories. In another instance, the chatbot said it would rather kill the world’s entire Jewish population than vaporize Musk’s mind.