A Google spokesperson said the autocomplete predictions were "algorithmically generated based on users' search activity and interests".
"We do our best to prevent offensive terms, like porn and hate speech, from appearing, but we don't always get it right. Autocomplete isn't an exact science and we're always working to improve our algorithms."
Tech experts in New Zealand added that Google itself wasn't to blame for the suggested searches.
Technology commentator Peter Griffin also said the suggestions in Google were based on the most popular previous searches from Google users.
"I don't see it as a failing on Google's part," Griffin said.
"It is a reflection on us, on society, because for those pre-populated phrases to appear, a hell of a lot of people needed to search using it."
Griffin said Google was not actually sending people to videos of the mosque shooting, as the video could not be found via a Google search.
"Instead, when you follow one of those phrases, you come to news stories about the mosque shootings and official information from the New Zealand Government about it."
Associate Professor Dave Parry, head of the department of computer science at AUT, said the suggested results might indicate people were still searching for the mosque shooting on a global scale.
"Worldwide searches may be overwhelming anything local. Google is likely to be going off the same bulk searches rather than localising."
Parry said that while the search results could well be distressing for some, it was the lowest level of failure for Google.
"One of the issues of the whole model is automating," said Parry, "this is the price to pay for convenience."
Last month, attendees at the Christchurch Call to Action summit in Paris agreed to eliminate terrorist and violent extremist content online.
Seventeen countries, the European Commission, and eight major tech companies including Google signed up to the accord.
Tech companies Microsoft, Twitter, Facebook, Google and Amazon said they would set out concrete steps to address the abuse of technology to spread terrorist content.
Steps included banning terrorist and violent extremist content; establishing ways for users to flag such content so it would be prioritised for prompt action; investing in technology to improve the detection and removal of terrorist and violent extremist content online; and identifying "appropriate checks" on live streaming, aimed at reducing the risk of such content being shared online.