Women in India train AI by watching violent videos all day
Women in rural India review hundreds of violent and pornographic videos each day to train AI systems. They classify content flagged by algorithms for global technology companies. Many workers report lasting psychological harm from the job.

Daily Exposure to Disturbing Material

Monsumi Murmu, 26, works from her village in Jharkhand state. She views up to 800 videos and images daily. Her job requires watching scenes of violence and abuse to help algorithms learn.

According to The Guardian, Murmu struggled to sleep in her first months. She saw images in her dreams. Now she describes feeling blank. She says the job has done something to her.

Raina Singh, 24, took a similar role in Uttar Pradesh. Her tasks shifted from text screening to flagging child abuse material on adult platforms. Later she categorized pornographic content for hours at a time. She says the work left her disgusted and disconnected from intimacy.

Mental Health Risks Go Unaddressed

Studies show content moderation causes lasting stress. Workers report intrusive thoughts, anxiety, and sleep problems. A December study found that traumatic stress is the most pronounced risk for these workers, and that existing support systems often fail to prevent secondary trauma.

Milagros Miceli, a sociologist leading the Data Workers' Inquiry, says the work belongs in the category of dangerous labor, comparing it to industries where workers risk their lives.

A Growing Workforce With Few Protections

An estimated 70,000 people in India worked in data annotation by 2021. About 80% come from rural or marginalized backgrounds. Women make up half or more of this workforce. Tech firms operate in smaller cities where labor costs are lower.

Job listings rarely explain the actual tasks. Workers sign contracts under vague titles like "data annotation" and only learn the real nature of the work after training begins. Researcher Priyam Vadaliya says this creates an expectation of gratitude that discourages complaints.

The Guardian spoke to eight companies. Only two said they provide psychological support. Others argued the work is not demanding enough to require mental healthcare. Strict non-disclosure agreements prevent workers from discussing their jobs, even with family. Violating these agreements can lead to termination or legal action.

Murmu earns about £260 a month. She fears unemployment more than the distress. She takes long walks in the forest to cope. She says she does not know if it fixes anything, but she feels a little better.
