The writer and internet activist Cory Doctorow has typed out millions of words in his half-century on the planet. None will ever be as resonant as one he coined himself: “enshittification”.
Enshittification – also known more politely as “platform decay” – describes the trajectory of new services and platforms on the internet.
In a 2023 missive on the topic, Doctorow wrote that such platforms “are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.”
The swathe of user-hostile changes under Elon Musk’s ownership of X probably warrants its own category, but if you’re a Facebook user, you’ve likely noticed a more gradual process.
Even if you have the time and motivation to constantly comb through your settings, your “news” feed these days is likely full of things you didn’t ask to see and don’t necessarily want to see, obvious scams included. They’re there to suit Facebook’s owner, Meta, not you.
The ostensible purpose of Facebook – keeping in touch with friends, family and communities – hangs on by a thread. But still you don’t leave, because there are friends and family there and those connections would be hard to replicate.
When my annual subscription to the AI-powered transcription service Otter rolled over a year ago, I failed to notice that the terms of my “Pro” account had been radically degraded. I was on an urgent deadline when I eventually discovered that the number of files I could upload in a month had been cut from “unlimited” to 10. While they were laying waste to the entire reason I was sending them money, Otter’s owners were stuffing their service with crap I didn’t want or need: irrelevant AI-generated text summaries and business process doodads. I had to fume for months before I could angrily cancel my sub.
Former cabinet minister Richard Prebble recently wrote a fanciful Herald column promising AI could “solve the GP shortage by us all having our own AI doctor”, and that AI avatars could provide individual tutoring for every child in school. The truth is that AI as we know it – large language model (LLM) services such as ChatGPT – is largely being used not to save the world, but as the most base commodity, to the point where we should be concerned about the enshittification of the internet itself.
Researchers at Amazon Web Services’ AI lab recently found that 57% of the sentences in a huge sample of web text had been translated into two or more languages. That might not sound so terrible, until you realise that the sheer scale of the phenomenon indicates these other-language sentences are the product of machine translation. They’re trash. And the English-language sentences they’re created from? Most likely AI-generated in the first place.
Such LLM services work by absorbing the massed text output of millions of humans, breaking it into units called tokens, then using those tokens to analyse and replicate the patterns of human language. But what happens when more than half of what they’re gobbling up is already AI-generated commercial slop? You might not want that to be your doctor.
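For the curious, the idea of learning language patterns from tokens can be sketched in a few lines of code. This toy example is purely illustrative and nothing like a production LLM: real systems use subword tokenisers and vast neural networks, not word counts. It does, though, make the “garbage in, garbage out” point concrete: the model can only ever echo whatever text it was fed.

```python
from collections import Counter, defaultdict

def tokenize(text):
    # Toy whitespace tokeniser; real LLMs use subword schemes such as BPE.
    return text.lower().split()

def train_bigram(corpus):
    # Count which token follows which: a crude stand-in for pattern learning.
    follows = defaultdict(Counter)
    for sentence in corpus:
        tokens = tokenize(sentence)
        for a, b in zip(tokens, tokens[1:]):
            follows[a][b] += 1
    return follows

def predict_next(follows, token):
    # Return the continuation seen most often in training, if any.
    options = follows.get(token)
    return options.most_common(1)[0][0] if options else None

corpus = ["the cat sat on the mat", "the cat ate the fish"]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" more than any other word here
```

Feed such a model machine-translated slop instead of human writing, and its predictions faithfully reproduce the slop.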
The financial fragility of the AI boom was underlined recently when news of DeepSeek – a Chinese LLM trained cheaply on AI-generated text, itself produced by other LLMs trained on human content taken without permission – sent US tech stocks plummeting.
But many billions in venture capital have been sunk by now, and the companies that have spent that money need to at least look like they’ve invested profitably.
That’s why Meta is now touting AI chatbots that impersonate humans. What we, the customers, think about it could hardly matter less.