She said new technology would make social media safer for users and help stop the "unintentional viewing of extremist content like so many people in New Zealand did after the attack, including myself, when it auto-played in Facebook feeds."
Rosen said that under the policy before today's changes, users were blocked from Facebook if they kept violating its Community Standards, such as by using terror propaganda in a profile picture or sharing images of child exploitation.
"We will now apply a 'one strike' policy to [Facebook] Live in connection with a broader range of offences," Rosen said in a statement.
"From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time - for example 30 days - starting on their first offense.
"People who have broken certain rules on Facebook – including our Dangerous Organisations and Individuals policy – will be restricted from using Facebook Live."
The policy includes people who are involved in or support terrorist activity, organised hate, mass or serial murder, human trafficking, or organised violence or criminal activity.
"For example, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time," Rosen said.
Facebook has come under intense criticism for its handling of the gunman's video on its platform. The company did not become aware of the video until 12 minutes after the livestream ended, and even then the alert came from police, not from its own algorithms or human moderators.
There were 1.5 million attempted uploads of the gunman's video within 24 hours of his livestream, and Facebook's AI technology automatically blocked 1.2 million of those uploads.
Users wanting to share the video altered the footage to sidestep AI detection; Facebook said there were 900 different variations of the footage.
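Facebook has not published the details of its matching technology, but the evasion it describes is easy to illustrate. The short Python sketch below is an illustration only, not Facebook's actual system; the function names and synthetic frames are invented for the example. It shows why an exact cryptographic hash fails to recognise a copy of a frame after a trivial edit such as a slight brightness change, while even a crude perceptual "average hash" still flags it as a near-duplicate.

```python
# Illustrative sketch only - NOT Facebook's matching system.
# Compares an exact cryptographic hash (defeated by any edit) with a
# simple perceptual "average hash" that still matches a lightly edited copy.
import hashlib
import numpy as np

def exact_hash(frame: np.ndarray) -> str:
    """Cryptographic hash: changing a single pixel yields a new digest."""
    return hashlib.sha256(frame.tobytes()).hexdigest()

def average_hash(frame: np.ndarray, grid: int = 8) -> np.ndarray:
    """Tiny perceptual hash: average grid x grid blocks, threshold at the mean."""
    h, w = frame.shape
    cropped = frame[: h - h % grid, : w - w % grid]
    blocks = cropped.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two perceptual hashes."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)  # stand-in frame

# A trivially edited copy: brightness nudged up, as a re-uploader might do.
edited = np.clip(original.astype(int) + 3, 0, 255).astype(np.uint8)

print(exact_hash(original) == exact_hash(edited))             # False: exact match defeated
print(hamming(average_hash(original), average_hash(edited)))  # small distance: still a near-duplicate
```

Production systems are far more sophisticated than this, but the gap between exact matching and perceptual matching is one reason small edits can force platforms to chase down each new variant separately.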
Rosen said the research money would go to partnerships with the University of Maryland, Cornell University and the University of California, Berkeley for:
• Detecting manipulated media across images, video and audio
• Distinguishing between unwitting posters and adversaries who intentionally manipulate videos and photographs
In a nod to what Facebook is expected to sign up for in the Christchurch Call To Action, Rosen said this was only the beginning.
"In the months to come, we will pursue additional collaboration so we can all move as quickly as possible to innovate in the face of this threat."
Part of the ongoing effort would look at videos depicting events that never happened, he said.
In response to the March 15 terror attack, Facebook has already banned white nationalist and white separatist content, and recently removed controversial figures - including Alex Jones, Milo Yiannopoulos and Laura Loomer - for promoting violence or hate.
This move was portrayed as an effort to tackle hate speech before it could erupt into something more destructive.
While Facebook chief operating officer Sheryl Sandberg said livestreaming safeguards would be explored, chief executive Mark Zuckerberg has already said that putting a delay on livestreams would fundamentally break the service.
Neither Sandberg nor Zuckerberg will be at the Christchurch Call summit. Facebook will be represented by former UK Deputy Prime Minister Nick Clegg, its vice president of global policy and communications.