Technology commentator Paul Brislen warns, for example, that people may be endangering their online security.
He told NewstalkZB's Andrew Dickens that social media platforms are using the 10-year challenge to train their software algorithms to identify people from photos.
Personally, I have many doubts. As O'Neill herself acknowledged, Facebook already has a lot of photographs of us, many stretching back a very long time (check out this photo of Mark Zuckerberg from 2005). The additional benefit that Facebook would get from this seems small. James Vincent at The Verge also noted that training facial recognition systems typically requires a much larger dataset to be effective.
Most importantly, I just think the reputational risk to Facebook would be gigantic. Every dot and comma of its privacy policies, its moderation guidelines and its consent dialogues is now under immense scrutiny. If it were found to have orchestrated the 10-year challenge, it would be quite a story. Facebook is also suffering more leaks than ever before, which suggests its employees aren't comfortable with everything it is doing. So the potential dangers are large, and the potential rewards relatively scant.
Facebook, indeed, firmly denied the story in a statement:
"This is a user-generated meme that went viral on its own. Facebook did not start this trend, and the meme uses photos that already exist on Facebook. Facebook gains nothing from this meme (besides reminding us of the questionable fashion trends of 2009). As a reminder, Facebook users can choose to turn facial recognition on or off at any time."
Still, the fact that the theory is so popular tells us a lot about how people view the social network. Obviously, its reputation is profoundly damaged. Earlier this week, responding to an unrelated story, the company's vice president of advertising, Rob Goldman, lamented that people so often "assume the worst about our technology and intentions". Well, get used to it. Two years of almost perpetual crisis, mostly the product of Facebook's own mistakes and lack of transparency, have eroded its benefit of the doubt. It has further tested the patience of journalists and politicians by evading tough questions and issuing careful denials that leave certain accusations subtly unaddressed.
But there is something else going on here, and I think it's about our discomfort with the way the whole internet now works. Laughing Men are an extreme case, and the 10-year challenge is a subtle one, but both are manifestations of the weird way in which technology, and particularly social media, can influence our behaviour in ways we find hard to understand.
As O'Neill points out, the Cambridge Analytica scandal happened because researchers used a fun little personality quiz to goad people into giving them their personal data (and the data of all of their friends). Similarly, there is a lot of anxiety among cybersecurity experts that innocent-seeming games which spread quickly across the internet might actually be data-gathering campaigns.
"Your porn star name is the name of the street you grew up on and the name of your first pet," one such game might say. Those things just so happen to be among the common security questions used by many online services including banks to verify their users' identities. "What car did you learn to drive stick shift on?" asked one California tire shop. "What was the make and model of your first car" is a common security question. Nevertheless, people do answer these questions, often without fully thinking through the implications.
And that is a persistent pattern in the way we use social media. I've written before about the widespread psychological tricks and techniques that keep social media users hooked. It is usually good for tech firms' user engagement statistics if we act first and think later. Most social networks function as reward systems for posting interesting or controversial content, reward systems which some of the best minds of humanity are beavering away every day to refine. As the writer Gurwinder Bhogal put it, "human life is gradually turning from a struggle against suffering into a struggle against pleasure."
The result is that we tweet things we later regret, get into fights we shouldn't get into, post photos we should have kept private and check our notifications when we are trying to concentrate on something else. Maybe, too, we participate in memes we will later have doubts about. Personally, I don't think putting old pictures of your face online is very dangerous, unless you are a spy or an investigative journalist. But it makes us anxious when we stop to realise that we don't really feel in control.
Psychological research suggests that people are attracted to conspiracy theories because they provide a sense of meaning and certainty in a chaotic, confusing world. They let us believe that disconnected events have a sensible explanation, and sometimes they flatter our sense of our own significance (at least the government is paying attention to me!).
In the same way, I think the 10-year challenge theory is popular because people want to believe that this state of addiction, of not being in control, has a source and a culprit. Someone is responsible for orchestrating it; someone, at least, has power and knows what is going on. There is a refuge in cynicism, in believing that we've been beaten by a clever and deliberate plan, rather than by a combination of our own basic urges and a multinational corporation's structural financial incentives.
There is truth to this. The state of the internet is indeed the product of specific business decisions by specific people. Tech companies are very clever and in many cases they have designed their systems to have exactly this effect. Sean Parker, an important figure in Facebook's early history, put it most colourfully when he said that he and his colleagues "exploited a vulnerability in human psychology", that they "understood this consciously... and did it anyway".
On the other hand, there is evidence that even the tech titans are scared and confused by the world they have created. Mark Zuckerberg has blogged of his concern that the most sensational content on Facebook gets the most attention (though he frames this as a problem with humans more than as a problem with Facebook). The list of ex-Facebookers and ex-Googlers going public with their doubts gets longer every month. "Most of us have regrets right now," said Jaron Lanier, a scientist at Microsoft.
And when you think about the incentives faced by Silicon Valley workers themselves, that makes sense. Chris Eberle, a former Facebook employee, recently told me that it was very easy for people who work there to swallow their doubts and focus on what paid their wages. For many tech workers, owning shares in their company is a key source of compensation. The price of those shares is directly tied to the company's profits, which are in turn tied to its user engagement statistics.
Certainly their compensation and benefits are very generous, so we need not see them as victims. Still, if they are just maximising engagement in order to maximise their compensation, at what point should we say that they, too, are trapped inside a system whose end results make them deeply uncomfortable?