AI in photojournalism: Synthetic press images?!

AI-generated, photorealistic images are considered particularly problematic in documentary, reportage and press photography. Image agencies, photographers and organizations are now beginning to respond.

The World Press Photo Foundation (WPP) has been promoting the work and commitment of photojournalists for 68 years. It has established ethical standards that are crucial to maintaining the integrity of their work. In times of viral disinformation, these standards are now more important than ever. In mid-November, however, the foundation announced that images edited with the help of artificial intelligence (AI) would be permitted in the Open category of the competition. But that's not all: WPP also advertised AI-generated images in its Instagram feed.

"However, such images have no connection to the real world. This connection is at the core of photojournalistic practice and journalism in general," wrote a group of former WPP winners, judges, photo editors and others in an open letter, calling on the organization to reconsider allowing AI in any form for the competition. Just a few days later, World Press Photo rowed back: AI-generated images are excluded from the World Press Photo Contest 2024.

Photo Council warns

Meanwhile, the current conflict in the Middle East has led to an enormous increase in the supply of AI-generated images relating to these events. In many cases, these alleged war images cannot be distinguished from authentic photos, either by experts or by technical systems for detecting AI-generated images.

On Adobe Stock, one of the leading image databases, a search for the keyword "Gaza" turns up AI-generated photorealistic images of destroyed cities, highly emotional images of children in landscapes of rubble, and portraits of bearded gunmen labeled "Hamas warriors" in the image description.

The images are offered on an equal footing with real photographic material: identically tagged and, in many cases, described similarly in terms of content.

The German Photo Council comments: "There is a high risk that, despite the provider's AI notice, signs of artificial generation are overlooked or even ignored when such images are purchased and during the subsequent editorial processes up to publication. When AI images are published alongside photos in the same environment, this suggests the authenticity of the synthetic images and weakens the credibility of the real photos.

The mere suspicion of misleading AI images in journalism triggers an irreversible loss of trust among media users."

The German Photo Council had already pointed out the dangers for social discourse in its position paper on AI in April. In light of the current situation, it has made its demands more specific: "Even if there may be legitimate reasons to create and offer universally valid symbolic images of conflicts or combat operations with the help of AI image generators, image providers must refrain from assigning such fictitious images to specific real events through keywording or labeling. It is misleading and invites misuse, for example, to label AI images of destroyed cities as 'city in the Gaza Strip' or synthetic images of gunmen as 'Hamas rebels'. Merely labeling AI images in the distribution of image rights is not sufficient if the other image information cannot be distinguished from that of real photos."

The Photo Council therefore calls for a clear no to AI images in reporting: "Journalistic media must have clear guidelines that rule out, as a matter of principle, the use of AI images in any connection with reporting on current events. Images must be checked for authenticity before publication. Editorial guidelines must be communicated transparently to media users. It must not fall to media users to verify the authenticity of every image on the basis of labels in image captions." The council also suggests clarifying the German Press Council's code of conduct on the use of symbolic images with regard to AI-generated images.

"Special responsibility is also required when using AI images in publications by public bodies, NGOs or political parties, as users place particular trust in these sources. Symbolic images, especially photorealistic images generated with AI, must be clearly labeled as such. The German Photo Council calls on those responsible to consider in each individual case whether a potentially particularly effective AI image really has a meaningful function or whether it is not being used more out of convenience or cost pressure. In case of doubt, an authentic image is preferable," says the umbrella organization of leading photographer associations.

Paris Charter

The Paris Charter on AI and Journalism, published in mid-November on the occasion of the Paris Peace Forum, points in a similar direction. The charter was drawn up by a commission initiated by Reporters Without Borders (RSF) and chaired by journalist and Nobel Peace Prize laureate Maria Ressa, in cooperation with organizations, experts in artificial intelligence, media representatives and journalists. Its aim: to define a set of fundamental ethical principles to protect the integrity of news and information in the age of AI.

Maria Ressa: "Artificial intelligence could provide remarkable services to humanity, but it clearly has the potential to amplify the manipulation of thought on an unprecedented scale. The Paris Charter is the first international ethical guideline for AI and journalism. Factual evidence, a clear distinction between authentic and synthetic content, editorial independence and human responsibility will be the most important safeguards for the right to reliable news and information in the age of AI. More than ever, journalism needs a solid and universally recognized ethical foundation."