Twitter, under pressure from governments around the world to combat online extremism, said that improved automation tools are helping it block accounts that promote terrorism and violence.
In the first half of the year, Twitter said it suspended nearly 300,000 accounts globally linked to terrorism. Of those, roughly 95 per cent were identified by the company's spam-fighting automation tools. Meanwhile, the social network said government data requests continued to increase, and that it provided authorities with data on roughly 3,900 accounts from January to June.
The growing role of machines in fighting extremism is born of necessity: manually identifying violent material among the millions of messages sent every day is an impossible task. Twitter currently has around 328 million users worldwide, with roughly 68 million monthly active users in the US.
Twitter, along with Facebook and YouTube, is instead building automation tools that can quickly spot troublesome content. Facebook has roughly 7,500 people who screen for troublesome videos and posts.
Facebook has also funded groups that produce anti-extremism content for circulation on the social network.