Your newsroom experiences a Midjourney-gate, too
With Stable Diffusion, DALL-E 2, and Midjourney opening the floodgates for aesthetically pleasing AI-generated images, it will only become harder for journalists to resist using this technology to illustrate their stories.
The current race to build tools, apps, and products on top of machine learning models is showing more people how easy it is to create visually appealing material that can enrich media content and news stories.
Want to try out an AI image generator? Everyone is now welcome. Need help writing the prompts you feed an AI image generator? There’s a marketplace for that. Want Stable Diffusion in a slick mobile app? It’s at the top of the App Store.
As we enter 2023, image generation is on the cusp of a real breakthrough. On yet another front, technology is forcing a new conversation about news credibility, and a fresh assessment of how it will be affected.
Technological innovation is blurring lines, and it is increasingly difficult for media industry workers to rely on the long-trusted distinction between “fake” and “real” when describing and working with AI-generated content in journalism.
In a critique of the so-called thumbnail culture that contaminated journalism after Facebook and Twitter introduced article and website previews, The Outline moaned in 2017 that not every article needs a picture. Now, with Stable Diffusion, DALL-E 2, and Midjourney opening the floodgates for aesthetically pleasing AI-generated images, it will only become harder for journalists to resist using this technology to illustrate their stories.
Prompted by a recent “Midjourney-gate,” in which it unknowingly used an AI-generated image of electricity pylons to illustrate a news update about an energy price plunge, the Norwegian public broadcaster NRK is working on a new set of guidelines for the use of AI-generated visuals in news production.
Gard Steiro, editor of VG, Norway’s largest news website, now stresses the need for news media to adopt a common approach and shared standards for crediting and using images.
The pressing need for rules and guidelines for creating or using AI-generated content should spawn a broader review of how images are treated by the news media.
Images should be published with more metadata and more contextual information so that audiences can scrutinize them, Steiro suggested at a recent industry event on “synthetic media.”
This neatly echoes the concerns raised by the Content Authenticity Initiative, a collaboration between technology and media companies to establish an open industry standard for metadata tied to content authenticity and provenance. With the latest developments in AI-generated content, such a standard could become a critical factor for the newsroom of the future.
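As a rough illustration of what machine-readable disclosure attached to an image could look like, the sketch below uses Python’s Pillow library to write a label into standard EXIF fields. The file names, tag values, and wording are hypothetical, and the Content Authenticity Initiative’s actual standard relies on cryptographically signed provenance manifests rather than plain EXIF tags.

```python
# A minimal sketch, not the Content Authenticity Initiative's signed-manifest
# standard: attach a plain-text AI-generation disclosure to an image's EXIF data.
from PIL import Image

DISCLOSURE = "AI-generated illustration; not a documentary photograph."

img = Image.open("pylons.jpg")           # hypothetical input file
exif = img.getexif()

# Standard EXIF/TIFF tags: 0x010E ImageDescription, 0x0131 Software, 0x8298 Copyright
exif[0x010E] = DISCLOSURE
exif[0x0131] = "Midjourney (assumed generator)"
exif[0x8298] = "Credit: newsroom graphics desk, AI-assisted"

img.save("pylons_labeled.jpg", exif=exif)
```

A real provenance record would also need to survive resizing, screenshots, and social media re-uploads, which is exactly the problem the cryptographic approach tries to address.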
Ståle Grut is a doctoral research fellow with the Photofake project at the University of Oslo.
