AI workers warn families: do not use chatbots


Workers who train and evaluate AI models are warning their friends and families to stay away from chatbots and image generators. These raters check AI responses for accuracy but say they see too many problems to trust the technology themselves.

Krista Pawloski works on Amazon Mechanical Turk, a platform where companies hire people to assess AI outputs. She moderates AI-generated text, images, and videos. Two years ago, she nearly labeled a racist tweet as acceptable because she did not recognize a slur. The mistake made her realize how many errors she and thousands of others might miss.

According to The Guardian, Pawloski now refuses to use generative AI at home. She tells her teenage daughter to avoid tools like ChatGPT. She also encourages people to test chatbots on topics they know well so they can spot mistakes.

Speed Over Quality

A dozen AI raters told The Guardian they distrust the models they work on. They point to rushed timelines and minimal training as major concerns. Workers say companies push for quick results instead of careful accuracy checks.

One Google AI rater evaluates responses from Google Search’s AI Overviews. She avoids using AI when possible and forbids her 10-year-old daughter from using chatbots, saying her child should develop critical thinking skills before relying on such tools. The rater also noted that colleagues without medical training evaluate health-related AI responses.

False Information in Confident Tones

NewsGuard, a media literacy nonprofit, audited the top 10 generative AI models in August 2025, including ChatGPT, Gemini, and Meta’s AI. Non-response rates dropped from 31 percent in August 2024 to zero in August 2025. But the rate at which chatbots repeated false information nearly doubled over the same period, from 18 percent to 35 percent.

Brook Hansen, another Amazon Mechanical Turk worker, said companies prioritize speed and profit over responsibility. Workers receive vague instructions and unrealistic time limits, and she believes that gap between expectations and support undermines safety and accuracy.

Workers Sound the Alarm

Alex Mahadevan directs MediaWise at Poynter, a media literacy program. He said the disconnect shows companies value shipping products over careful validation. The public increasingly relies on AI for news and information, but errors persist.

One AI tutor who worked with Gemini, ChatGPT, and Grok said the team jokes that chatbots would be great if they stopped lying. Another Google rater asked the model about Palestinian history but got no answer. When he asked about Israeli history, the model provided extensive information.

Hansen compares AI ethics to the textile industry. Consumers once ignored how cheap clothes were made. Now they ask questions about labor and environmental costs. She believes AI deserves the same scrutiny.
