Inside Google’s hidden team rating Gemini and AI Overviews


Thousands of contract workers review and shape the outputs of Google's AI products. Rating responses for Gemini and AI Overviews, they describe tight deadlines, low pay, and shifting rules. According to The Guardian, many say the job is stressful and largely hidden from view.

The “shadow workforce” behind Gemini

Workers hired through Hitachi’s GlobalLogic rate responses for Gemini and AI Overviews. Some also moderate violent or sexual content from the AI.

Rachael Sawyer, a generalist rater since March 2024, said she received no warning about distressing tasks. She reported anxiety and panic attacks and said she was given no mental health support.

GlobalLogic divides raters into generalist and super-rater groups. Workers said the super-rater team grew from 25 people in 2023 to almost 2,000. Most are based in the US and work in English.

Pay, hiring, and pressure

Workers said pay starts at $16 an hour for generalists and $21 for super raters. Many are teachers and writers, and some hold advanced degrees.

Ten workers told The Guardian they face shrinking task timers and siloed work. One said her time per task fell from 30 minutes to 15, even as responses could run about 500 words.

A 2023 letter from an Appen contractor to the US Congress warned that the pace could make Bard a “faulty” and “dangerous” product.

Shifting guidelines and safety concerns

Raters described rating two sample answers per prompt, checking factuality and source quality. They said guidance changed often, and context was limited.

Some handled “sensitivity tasks” with prompts on corruption or child soldiers. One worker said consensus meetings could favor more forceful voices.

After AI Overviews told users to put glue on pizza and to eat rocks, Google focused on "quality," workers said. They added that the push did not last.

In December 2024, contractors were told not to skip prompts for lack of expertise, per a report cited by The Guardian. Instead, they were to rate the parts they understood and note gaps in their knowledge.

In April, raters received new rules that treated repeating user-provided hate or explicit content differently from generating it. Google said its AI policies have not changed on hate speech. The Guardian noted a December 2024 policy clause allowing exceptions for art or education.

Workers also reported rolling layoffs in 2025, with teams shrinking to roughly 1,500 people. Many said they now avoid using LLMs themselves because they know how the systems are built.

