Google is now training its computers to recognise videos that are offensive by analysing them on a frame-by-frame basis and comparing them with descriptions of content.
It said it hoped to teach its computers to understand the context of footage, so that videos people easily recognise as offensive will also be flagged automatically.
Philipp Schindler, Google's chief business officer, said: "Computers have a much harder time understanding context, and that's why we're actually using all of our latest and greatest machine learning abilities now to get a better feel for this."
The company hopes its programme will be able to tell the difference between a movie star waving a gun and an extremist doing the same. However, it said it will never be able to solve the problem completely, as 400 hours of new content are uploaded to YouTube every minute.
"No system can be 100 per cent perfect," he said. "But we're working as hard as we can to make it as safe as possible."
More than 250 organisations in the UK including the British Government, Toyota, Tesco and McDonald's stopped UK advertising on YouTube after it emerged that they were being promoted on videos posted by hate preachers, rape apologists and extremists banned in Britain.
Google has changed which videos can carry advertising and given advertisers the power to fine-tune the types of content they are willing to appear alongside.
Last week Amber Rudd, the Home Secretary, summoned executives from Google, Twitter, Facebook and Microsoft to a summit at the Home Office after the Westminster terrorist attack.
The companies agreed to create new "technical tools to identify and remove terrorist propaganda".
They also agreed to look at "options" for a body "to accelerate" how they take down extremist content.
However, they were criticised for paying "lip service" to the issue by failing to commit fully to setting up an industry body to tackle it.
This story originally appeared in the Daily Telegraph.