Workers who train and evaluate AI models are warning their friends and families to stay away from chatbots and image generators. These raters check AI responses for accuracy but say they see too many problems to trust the technology themselves.
Krista Pawloski works on Amazon Mechanical Turk, a platform where companies hire people to assess AI outputs. She moderates AI-generated text, images, and videos. Two years ago, she nearly labeled a racist tweet as acceptable because she did not recognize a slur. The mistake made her realize how many errors she and thousands of others might miss.
According to The Guardian, Pawloski now refuses to use generative AI at home. She tells her teenage daughter to avoid tools like ChatGPT. She also encourages people to test chatbots on topics they know well so they can spot mistakes.
Speed Over Quality
A dozen AI raters told The Guardian they distrust the models they work on. They point to rushed timelines and minimal training as major concerns. Workers say companies push for quick results instead of careful accuracy checks.
One Google AI rater evaluates responses from Google Search’s AI Overviews. She avoids using AI when possible and forbids her 10-year-old daughter from using chatbots, wanting the child to develop critical thinking skills before relying on the technology. The rater also noted that colleagues without medical training evaluate health-related AI responses.
False Information in Confident Tones
NewsGuard, a media literacy nonprofit, audited the top 10 generative AI models in August 2025. The audit included ChatGPT, Gemini, and Meta’s AI. Non-response rates dropped from 31 percent in August 2024 to zero in August 2025. But the rate at which chatbots repeated false information nearly doubled, from 18 percent to 35 percent.
Brook Hansen, another Amazon Mechanical Turk worker, said companies prioritize speed and profit over responsibility. Workers receive vague instructions and unrealistic time limits. She believes this gap between what workers are given and what they are expected to deliver undermines safety and accuracy.
Workers Sound the Alarm
Alex Mahadevan directs MediaWise at Poynter, a media literacy program. He said the disconnect shows companies value shipping products over careful validation. The public increasingly relies on AI for news and information, but errors persist.
One AI tutor who worked with Gemini, ChatGPT, and Grok said her team joked that chatbots would be great if they stopped lying. Another Google rater asked the model about Palestinian history but got no answer. When he asked about Israeli history, the model provided extensive information.
Hansen compares AI ethics to the textile industry. Consumers once ignored how cheap clothes were made. Now they ask questions about labor and environmental costs. She believes AI deserves the same scrutiny.