A growing number of AI-generated images are being used by non-governmental organisations and stock photo platforms to depict extreme poverty, malnourished children and survivors of sexual violence, according to global health professionals. The practice has raised serious ethical concerns about dignity, representation and consent.
What’s Happening
- Researchers have collected over 100 AI-generated images used in campaigns by NGOs, featuring scenes of children in muddy water, barren earth and exaggerated depictions of suffering. These are being described as a new phase of “poverty porn”.
- These images are now available on popular stock image platforms and are used in social media campaigns and fundraising drives. They often bypass the usual costs and consent processes involved in real-world photography.
- One expert said the visuals “replicate the visual grammar of poverty: children with empty plates, cracked earth, stereotypical visuals.”
- Some organisations defend the practice by claiming that synthetic images address issues of consent, since no real person is depicted, but critics say the imagery still objectifies and misrepresents communities.
- Stock platforms are profiting from the trend: AI-generated images tagged with keywords like “poverty” or “refugee camp” are being licensed for tens of dollars.
- Beyond fundraising, there is concern that such imagery may influence AI training data and entrench harmful stereotypes, especially when representations overwhelmingly depict Black and Brown bodies in situations of vulnerability.
Ethical & Practical Concerns
- Representation and dignity: When suffering is packaged as spectacle, it can reduce human beings to symbols of charity rather than agents of change. Many campaigners say this undermines the dignity of those portrayed.
- Consent and authenticity: Real-world photography is meant to follow ethical standards around permission, identity protection and contextual accuracy; it is increasingly being replaced by synthetic imagery that lacks that transparency and traceability.
- Stereotyping and bias: These visuals can reinforce racial and cultural stereotypes, for instance by always showing African children in extreme conditions, and may distort reality more than illuminate it.
- Impact on trust: If donors or the public suspect images are fake or manipulated, it can erode trust in humanitarian organisations and the causes they represent.
- AI feedback loop: The proliferation of such images may feed into future generative AI training sets, amplifying biased imagery or misrepresentations in new contexts.
What Organisations Are Saying & Doing
- Some NGOs have announced updated guidance to avoid using AI-generated images of identifiable children or communities, especially without disclosure.
- Stock photo platforms claim they are aware of bias issues and have introduced filters or labels for AI-generated content; however, critics say that self-regulation is insufficient.
- Researchers are calling for clearer international standards on the use of synthetic imagery in humanitarian and global health communications, including requirements around labelling, context and human-centred storytelling.
Why This Matters
The global aid sector relies heavily on visual storytelling to raise awareness and funds. With the rise of generative AI, the tools have changed but the ethical stakes remain high. If synthetic imagery replaces authentic, context-rich human photography without transparent disclosure, the risk is that real suffering becomes commodified, power dynamics remain unchecked and public confidence is undermined.