ANALYSIS: The National Party’s mini-scandal in May over its use of AI-generated images in political attack ads is an eerie portent of things to come.
The images, probably the product of Midjourney, an artificial-intelligence-driven image generator that responds to users’ text prompts, looked like the sort of fare served up by stock photo providers Shutterstock and iStock.
But the odd gaze of the woman looking apprehensively out a window in one image caused many to take a second glance. The strange-looking faces under the ski masks of the thieves raiding a jewellery store were really what gave the game away. National had to own up to the fact that they were indeed AI-generated – good but not perfect. The party defended them as “an innovative way to drive our social media”.
National is right – they are innovative. But they are also troubling, part of a new wave of AI-generated content quickly filling up the web, one that will leave all of us with a sense of cognitive dissonance as we consume words, images, videos and audio clips that don’t seem completely real.
As The Verge writer James Vincent put it last week, “essentially, this is a battle over information – over who makes it, how you access it and who gets paid”.
The web has been through this before. In the early 2000s, as search became both our gateway to the internet and a powerful business, we saw the rise of search-engine optimisation (SEO). Google was the search-engine giant and thousands of people were employed to create content that had the best chance of ranking highly in its search results. There was a deluge of content designed around the “search terms” that SEO experts guessed users would type.
The businesses paying for this content didn’t care if it was bland and derivative. As long as it led potential customers to click links to their websites, it served its purpose.
The rise of smartphone apps saw developers try to divert attention from the web into walled gardens of content where they could capture our attention with advertisements. Apps are now a multibillion-dollar economy in their own right, but they haven’t replaced websites.
Today’s wave of AI-generated content is gathering pace as the tools become readily available. I could set up a website tomorrow and use prompts fed into ChatGPT to produce in minutes more articles than the Listener’s writers could pen in a month. But the results would make your eyes glaze over.
That hasn’t stopped such sites appearing in droves. NewsGuard, which tracks news credibility, has identified 217 “unreliable artificial intelligence-generated news websites” that have little or no human oversight. They are run largely by bots, mixing in a dose of misinformation with their “listicles” and self-improvement tips.
Still, they are attracting “programmatic” advertising, with big brands willingly paying for display ads on them, probably oblivious to the fact they are zombie sites.
Well-established online communities, from the web’s default crowd-sourced encyclopaedia Wikipedia to social-media platform Reddit, now face a dilemma. Should they let AI-generated content onto their platforms or invest more in the human creativity that their success is built on?
Google has long fought efforts to game its search algorithms. But when it comes to AI content, its focus is on “the quality of content rather than how content is produced”.
It is experimenting with an AI-powered search engine that displays auto-generated search blurbs, meaning we may no longer need to click through to a website for further information. That could profoundly change how we search the web, with pros and cons for people still involved in the art of creating content the old-fashioned human way.
As The Verge’s Vincent writes, “The new web is struggling to be born, and the decisions we make now will shape how it grows.”