Facebook introduced counter-terror algorithms nearly two years ago, but its systems failed to detect the livestream or stop it being shared on its platform.
Brian Fishman, Facebook's policy director for counter-terrorism, told lawmakers the livestream was not gruesome enough to trigger its filters, according to the Daily Beast, an American news and opinion website.
Representatives from four technology companies gathered with members and staff of the House Homeland Security Committee for a briefing on March 27.
Executives from Facebook, Google, Twitter and Microsoft reportedly attended.
Facebook has been savaged for allowing the livestream to continue to be shared and viewed around the world.
Committee members challenged Fishman on his claims, the Daily Beast reported.
One member of Congress said the video was so violent it looked like footage from the video game Call of Duty.
The 17-minute livestream, filmed on the accused gunman's helmet camera, shows him driving to the Christchurch mosque and the shooting inside.
The video was uploaded to Facebook more than 1.5 million times.
Privacy Commissioner John Edwards has castigated Facebook over its lax handling of the Christchurch livestream.
"Facebook cannot be trusted. They are morally bankrupt pathological liars who enable genocide (Myanmar), facilitate foreign undermining of democratic institutions," he said in a tweet he later deleted.
"[They] allow the live streaming of suicides, rapes, and murders, continue to host and publish the mosque attack video, allow advertisers to target 'Jew haters' and other hateful market segments, and refuse to accept any responsibility for any content or harm. They #DontGiveAZuck."
The tweets made news around the world, as did his earlier comment that Facebook's initial silence was "an insult to our grief".
Spokespeople for Facebook and the committee declined to comment because the meeting was held behind closed doors, the Daily Beast said.
Facebook has previously addressed criticism of its artificial intelligence systems by saying they are based on "training data, which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video".
"This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems.
"However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare."