Twitter admitted in a 2014 diversity report that it has "a lot of work to do".
Everyone knows that Twitter has a serious, unsolved problem with abuse and harassment on its platform. I know it. Twitter knows it. If you're a frequent Twitter user - particularly a woman - you likely know it, too, because according to the Pew Research Center, more than seven in 10 Internet users have witnessed online abuse.
Frustratingly, however, none of this mounting awareness seems to have resulted in substantive change. Yes, Twitter has recently streamlined its abuse-reporting process and softened some of the language in its abuse policies. But after each reform rolls out, there's usually a backlash from activists and victims' advocacy groups, who complain that the new protections are too weak, too cosmetic or too easy to evade.
As if to further prove the point, a recent analysis of Twitter abuse by the group Women, Action and the Media (WAM!) found that even when reports are vetted by a third-party group, Twitter takes action on only 55 per cent of reported abuse.
Inspired by that unending cycle, we turned to a group of experts - victims, advocates and academics - to ask what concrete anti-harassment tools they would introduce. The responses varied widely, from tweaks to Twitter's user interface to wholesale reversals of policy. Put them together, however, and you have a pretty clear vision of what a troll-free Twitter could be.
1. Expand the "quality filter" to all users.
In March, Twitter began beta-testing an anti-harassment tool it's calling "the quality filter": a net that snags threatening, abusive and spammy tweets, much like an email spam filter does. The tool, as we reported then, works really well. But it's only available to verified users - less than 1 per cent of Twitter's user base - and then only on iPhone.
That's unfortunate, says Soraya Chemaly, the founder of the Safety and Free Speech Coalition, because the quality filter is basically the only thing standing between victims and a technique called "dogpiling": "where dozens, hundreds or even sometimes thousands of Twitter users converge on one target to overwhelm their mentions and notifications."
Twitter hasn't commented on when, or even whether, the feature will roll out more widely.
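Twitter hasn't explained how the quality filter actually works under the hood, either. Purely as an illustration of the general idea - a spam-filter-style score applied to incoming mentions - here is a minimal Python sketch. The keyword list, weights and threshold below are invented for the example, not anything Twitter has described.

```python
# Illustrative sketch only: Twitter has not published how its "quality filter"
# works. This mimics the general idea of an email-style spam filter applied to
# mentions. Every keyword, weight and threshold here is invented.

ABUSIVE_TERMS = {"kill yourself", "die", "worthless"}   # hypothetical term list

def quality_score(tweet_text: str, sender_followers: int, sender_age_days: int) -> float:
    """Higher scores mean the mention looks more like abuse or spam."""
    text = tweet_text.lower()
    score = sum(2.0 for term in ABUSIVE_TERMS if term in text)
    if sender_age_days < 7:      # brand-new accounts are treated as riskier
        score += 1.0
    if sender_followers < 5:     # near-zero followers is another weak signal
        score += 0.5
    return score

def hide_from_notifications(tweet_text: str, sender_followers: int,
                            sender_age_days: int, threshold: float = 2.0) -> bool:
    """Quietly drop the mention from the recipient's notifications if it scores too high."""
    return quality_score(tweet_text, sender_followers, sender_age_days) >= threshold

print(hide_from_notifications("you are worthless, just die", 2, 1))        # True
print(hide_from_notifications("great panel today, thank you!", 800, 900))  # False
```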
2. Let users batch-report abuse.
Twitter's abuse-reporting process has improved markedly in the past six months: The form now takes roughly 30 seconds to fill out, where it previously took upwards of three to five minutes. But despite those improvements, Twitter still requires that you fill out a separate report for every abusive tweet you receive. And when you're getting a lot of abuse, that's basically impossible.
At the Theorizing the Web Conference in New York last month, the interaction designer Caroline Sinders characterized this not as a policy problem, but as an interface problem. Sinders, of all people, would probably know: In addition to being an artist and a frequent Twitter user, she's a user researcher for IBM Watson.
Twitter built its reporting tools with the expectation that users would only flag one tweet at a time, Sinders explained. But since we now know that many abusers either sustain their harassment over time or act as part of a coordinated group, Sinders suggests Twitter empower users to report batches of tweets at once, which would give moderators more information with which to make a decision.
In the interest of understanding the context of abuse, some advocates have proposed another idea, too: invite users to indicate whether a given incident is part of a larger pattern of abuse.
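To make the interface idea concrete, here is a hypothetical sketch of what a batched report might carry: several tweet IDs from one incident, plus flags for whether it's part of a pattern or a coordinated pile-on. This is an illustration of the concept, not Twitter's actual reporting system.

```python
# Hypothetical sketch of a batched abuse report - not Twitter's actual API.
# One submission carries every tweet in an incident plus context about patterns
# and coordination, so a moderator sees the whole picture, not one message.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BatchAbuseReport:
    reporter_id: str
    reported_account: str
    tweet_ids: List[str] = field(default_factory=list)  # all tweets in the incident
    part_of_pattern: bool = False   # "has this account targeted you before?"
    coordinated: bool = False       # multiple accounts acting together?
    notes: str = ""

    def add_tweet(self, tweet_id: str) -> None:
        """Attach another abusive tweet to the same report."""
        if tweet_id not in self.tweet_ids:
            self.tweet_ids.append(tweet_id)

# A victim reports three tweets from the same account in a single submission.
report = BatchAbuseReport(reporter_id="u123", reported_account="troll_account",
                          part_of_pattern=True)
for tid in ("881", "882", "904"):
    report.add_tweet(tid)
print(len(report.tweet_ids))  # 3
```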
3. Lower the bar for what constitutes "harassment" on Twitter.
According to Twitter's current abuse policies, the site only counts something as "harassment" if it contains an actual threat or "promotion" of violence - a standard so high that it approaches the legal bar for prosecuting harassment. Under this policy, I'm not allowed to tweet that I'm going to kill you. But if I message you once a day, every day, about your eventual death, that's technically cool.
"For women, particularly those being stalked, slut-shamed or are involved in intimate partner violence situation, the problem isn't 'imminent violence,'" Chemaly said. Instead, it's "hypervigilance to pervasive threat, a metric that changes expectations of 'safety.'"
4. Keep evidence of tweet-and-delete abuse.
Trolls (on any platform!) are frequently one step ahead of moderators - and the tweet-and-deleters are a case in point. Because they realize that their abusive tweets could be spotted, and punished, by Twitter's abuse team, these clever people have taken to tweeting abusive or threatening messages, waiting just long enough for the victim to see them, and then deleting the evidence.
Since Twitter doesn't currently accept screenshots as evidence of abuse, and the site's data retention policy doesn't make deleted tweets available internally, there's nothing stopping abusers from using this technique again and again. That's pretty alarming when you consider that tweet-and-deleters are often exposing their victims' personal information; the WAM! report concludes that's why Twitter often fails to respond to doxing cases.
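One way to close that loophole - again, an illustration of the idea rather than anything Twitter has committed to - is to snapshot every tweet named in an abuse report at the moment the report is filed, so the record survives even after the author deletes it.

```python
# Illustrative sketch of the "keep evidence of deleted tweets" idea. Assumption,
# not Twitter's stated design: the moment an abuse report is filed, the reported
# tweets are copied into an evidence store keyed by report ID, so a later delete
# by the abuser doesn't erase what moderators can see.

live_tweets = {
    "881": {"author": "troll_account", "text": "threatening message", "deleted": False},
}
evidence_store = {}   # report_id -> snapshots of the reported tweets

def file_report(report_id, tweet_ids):
    """Snapshot reported tweets immediately, before the author can delete them."""
    evidence_store[report_id] = [dict(live_tweets[t]) for t in tweet_ids if t in live_tweets]

def delete_tweet(tweet_id):
    """The abuser deletes the public copy..."""
    live_tweets[tweet_id]["deleted"] = True

file_report("r-1", ["881"])
delete_tweet("881")
print(evidence_store["r-1"][0]["text"])   # ...but the snapshot survives
```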
5. Allow users to customize their privacy settings.
Twitter's privacy settings are pretty blunt, more like Instagram than Facebook: If your account is private, no one sees it but the users you specifically approve; if it's public, everyone sees every tweet - and anyone can tweet at you. Twitter does have two tools - blocking and muting - which hide your feed from an offender and hide his tweets from you, respectively. But Lindy West, a feminist writer whose January piece on Twitter trolling prompted CEO Dick Costolo to admit the site had a problem, says that's not enough.
"I'd also really like, if I block someone, for them to actually be blocked from ... tweeting at me," she said. "I would like the option to make myself inaccessible to my harassers, as far as that's possible."
Sinders, the interaction designer, draws inspiration from the crowd-sourced anti-harassment tool Block Together, which gives users wide-ranging control over the types of accounts that follow them. Why, for instance, can't users filter out messages from accounts that don't meet a certain threshold of account age or follower count?
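That kind of user-controlled rule is simple to express. The sketch below is a hypothetical, Block Together-style filter - the Account shape and the default minimums are invented for the example - that lets a recipient screen out mentions from accounts that are too new or have too few followers.

```python
# Hypothetical, Block Together-style filter: the recipient decides that mentions
# from very new or near-followerless accounts never reach them. The Account shape
# and the default minimums are invented for this example.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    age_days: int
    followers: int

def passes_user_filter(sender: Account, min_age_days: int = 30, min_followers: int = 10) -> bool:
    """True only if the sender clears the recipient's chosen minimums."""
    return sender.age_days >= min_age_days and sender.followers >= min_followers

# A throwaway account created yesterday gets screened out; an established one gets through.
print(passes_user_filter(Account("egg_account_42", age_days=1, followers=0)))       # False
print(passes_user_filter(Account("longtime_mutual", age_days=2000, followers=340))) # True
```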
6. Transparently explain why abuse reports are rejected.
According to WAM's statistics, Twitter chooses not to act on roughly 45 per cent of the legitimate abuse reports it receives. When Twitter doesn't act on a report, the person who filed it gets a form email that basically just says "we've investigated the account and found that it's not violating Twitter Rules" - which, as you can imagine, is profoundly frustrating.
These messages could go further, advocates say, to explain Twitter's standards and to help legitimate victims appeal. For instance, instead of vaguely saying that "this doesn't violate the rules," the email could explain that "you haven't shown this person was targeting you." It's a labour-intensive change for Twitter, but one that could help victims re-file their reports.
7. Connect victims to support resources.
Facebook has set the industry standard for trauma-response on social networks, directing people who may be suicidal, for instance, to help lines and advice from experts. But Twitter hasn't taken the same steps: In both its communications with abuse victims and its stated policies, Twitter only advises people facing online harassment to consider contacting law enforcement.
"Twitter should acknowledge the potential trauma that targets may experience," Chemaly said. "Additionally, connecting users to support resources would go a long way in offering acknowledgement and validation."
Ultimately, West argues, Twitter's problem is a cultural one: 90 per cent of the company's tech employees are men, which she says means "there's an experience gap, and an empathy gap" that prevents them from adequately addressing the needs of women. Twitter is aware of that problem: In 2014, when the site published its first diversity report, it admitted that it had "a lot of work to do" and vowed more support both for its own female and minority employees and for outside pipeline programs like Girls Who Code.
Of course, culture shifts take time: Twitter published those diversity numbers last year, and it isn't clear whether they've substantially changed since then. In the meantime, Chemaly says, all of the site's moderators - male and female - should get serious training in gender-based violence.