Charities use fake AI images of poor kids to raise money

Aid organizations now use AI-generated images of extreme poverty and vulnerable people in social media campaigns. According to The Guardian, global health professionals warn that these pictures amount to a new form of poverty exploitation, raising questions about ethics and consent in development work.

Rise of Synthetic Imagery in Charity Campaigns

Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp, collected more than 100 AI-generated images used by individuals and NGOs. The pictures show children huddled in muddy water and tearful African girls in wedding dresses. He calls the phenomenon poverty porn 2.0.

Noah Arnold works at Fairpicture, a Swiss organization focused on ethical imagery. He says some groups actively use AI imagery while others experiment with it. The practice stems from concerns over consent and cost, and US funding cuts to NGO budgets have made the situation worse.

Stock photo sites now host dozens of AI-generated poverty images. Adobe Stock Photos and Freepik sell licenses for pictures with captions such as "photorealistic kid in refugee camp"; Adobe offers some licenses for about £60. Joaquín Abela, CEO of Freepik, says responsibility lies with media consumers, not platforms.

Major Organizations Draw Criticism

Plan International released a 2023 video campaign against child marriage. The charity's Dutch arm used AI-generated images of a girl with a black eye and a pregnant teenager. A spokesperson said the organization wanted to safeguard the privacy and dignity of real girls. The charity now advises against using AI to depict individual children.

The UN posted a video last year featuring AI-generated testimony from a Burundian woman describing sexual violence, along with artificially created reenactments of conflict-related abuse. The UN removed the content after The Guardian requested comment, and a UN Peacekeeping spokesperson called it an improper use of AI.

Kate Kardol, an NGO communications consultant, says the images frighten her. She recalls earlier debates about poverty exploitation in the sector. Arnold notes the trend comes after years of discussion about ethical imagery and dignified storytelling. Alenichev warns these biased images may train future AI models and amplify prejudice.
