Aid organizations are now using AI-generated images of extreme poverty and vulnerable people in social media campaigns. According to The Guardian, global health professionals warn that these pictures amount to a new form of poverty exploitation, raising fresh questions about ethics and consent in development work.
Rise of Synthetic Imagery in Charity Campaigns
Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp, has collected more than 100 AI-generated images used by individuals and NGOs. The pictures show children in muddy water and African girls in wedding dresses with tears on their faces. He calls the phenomenon "poverty porn 2.0".
Noah Arnold works at Fairpicture, a Swiss organization focused on ethical imagery. He says some groups are already actively using AI imagery while others are experimenting with it. The practice stems from concerns over consent and cost, and US funding cuts to NGO budgets have made the situation worse.
Stock photo sites now host dozens of AI-generated poverty images. Adobe Stock Photos and Freepik sell licenses for pictures with captions such as "photorealistic kid in refugee camp"; Adobe offers some licenses for about £60. Joaquín Abela, CEO of Freepik, says responsibility lies with media consumers, not with the platforms.
Major Organizations' Campaigns Draw Criticism
Plan International released a 2023 video campaign against child marriage. The Dutch arm of the UK charity used AI-generated images of a girl with a black eye and of a pregnant teenager. A spokesperson said the organization wanted to safeguard the privacy and dignity of real girls; the charity now advises against using AI to depict individual children.
The UN posted a video last year featuring AI-generated testimony from a Burundian woman describing sexual violence, including artificially created reenactments of conflict-related abuse. The UN removed the content after The Guardian requested comment, and a UN Peacekeeping spokesperson called it an improper use of AI.
Kate Kardol, an NGO communications consultant, says the images frighten her, recalling earlier debates about poverty exploitation in the sector. Arnold notes that the trend comes after years of discussion about ethical imagery and dignified storytelling. Alenichev warns that these biased images may be used to train future AI models, amplifying prejudice further.