Through image matching, AI can prevent images and videos that have already been flagged from being uploaded again.
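Facebook has not detailed how its matching works, but the underlying technique, perceptual hashing, can be sketched in a few lines of Python. The open-source imagehash library, the stored hash value, and the distance threshold below are illustrative assumptions, not Facebook's actual system:

```python
# A minimal sketch of image matching via perceptual hashing, using the
# open-source `imagehash` library. Facebook's production system is
# proprietary; the hash value and threshold here are illustrative.
from PIL import Image
import imagehash

# Hashes of previously flagged images (hypothetical database entry).
FLAGGED_HASHES = {
    imagehash.hex_to_hash("d1c4f0e8b2a39687"),
}

MAX_DISTANCE = 5  # Hamming-distance tolerance for near-duplicates (assumed)

def is_flagged(path: str) -> bool:
    """Return True if an uploaded image matches a flagged hash."""
    upload_hash = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance,
    # so a small distance means the images are visually near-identical.
    return any(upload_hash - h <= MAX_DISTANCE for h in FLAGGED_HASHES)
```

Unlike a cryptographic checksum, a perceptual hash changes little when an image is resized or recompressed, which is what lets re-uploads of known material be caught.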
The company is also building an algorithm that analyzes written text, with the aim of keeping terrorism-related language off the platform.
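Facebook has disclosed nothing about the model itself; a hedged sketch of the general approach, a text classifier that routes only high-confidence cases to automated action, might look like the following. The training data, features, and threshold are all hypothetical:

```python
# A hedged sketch of text classification for policy-violating language,
# using scikit-learn. Training data, model, and threshold are placeholders;
# Facebook has not disclosed its actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder labeled examples: 1 = violating, 0 = benign.
texts = ["example violating post", "example ordinary post"]
labels = [1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def auto_remove(post: str, threshold: float = 0.9) -> bool:
    """Act automatically only when the model is highly confident;
    lower-confidence cases would go to human reviewers instead."""
    return model.predict_proba([post])[0][1] >= threshold
```

The confidence threshold is the crux of such a design: clear-cut cases are handled by the machine, while ambiguous posts fall through to people.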
But Facebook acknowledged that human expertise is key to its new measures.
"AI allows us to remove the black-and-white cases very, very quickly," said Brian Fishman, the lead policy manager for counterterrorism at Facebook.
But he added that human experts are better at analyzing the context of a post and at grappling with the evolving methods used to bypass Facebook's counterterrorism measures.
Facebook is also developing systems to block terrorists' accounts across its flagship social network and its sister app, WhatsApp.
Facebook declined to say what types of customer data will be shared between its apps, but said that cross-platform systems being developed for counterterrorism purposes are separate from its commercial data sharing.
In recent years, Facebook has been criticized for not doing enough to combat propaganda and extremist content online.
After the terrorist attack in London this month, British Prime Minister Theresa May attacked Web companies for providing a "safe space" for people with violent ideologies.
Under pressure from governments around the world, the tech industry has responded to this type of criticism before.
Facebook, Twitter, Google and Microsoft said they would begin sharing unique digital fingerprints of flagged images and videos to keep them from resurfacing on other online platforms.
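The companies have not published the fingerprint format; the industry database reportedly relies on robust perceptual hashes, but the basic idea of a shared fingerprint can be illustrated with a plain cryptographic digest, which catches only exact duplicates:

```python
# A minimal sketch of a shared "digital fingerprint": hashing a file so
# other platforms can check uploads against the same database. SHA-256,
# used here for simplicity, matches only byte-identical copies; the real
# shared database reportedly uses robust perceptual hashes instead.
import hashlib

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a media file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Each participating platform checks new uploads against the shared set.
shared_database = set()  # digests contributed by the member companies
```

Because every company computes the same fingerprint from the same file, material removed on one platform can be recognized on the others without the image itself being exchanged.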
In a separate post Thursday morning, Facebook said it will be seeking public feedback and sharing its own thinking on thorny issues, including the definition of fake news, the removal of controversial content, and what to do with a person's online identity when they die.