Elon Musk famously called it "our biggest existential threat".
Physicist Stephen Hawking said that, limited by slow biological evolution, humans wouldn't be able to compete and would be superseded.
But the technology that sparked those fears - artificial intelligence - is also being touted as the biggest potential advance in our history.
A recent international study found that 50 per cent of experts questioned believe that artificial intelligence - or AI - will be smarter than humans within the next 24 years, and 90 per cent believe that milestone will be reached within 60 years.
While smarter machines hold out the promise of more efficient solutions to many problems, they also raise concerns about industries being disrupted and jobs vanishing, as well as legal and ethical issues.
In the past 10 years, investment in AI has increased exponentially, with countries spending billions on the technology. Now a report from the Institute of Directors and Chapman Tripp, released today, says New Zealand should set up an industry body to prepare for the effects AI will have on the economy, work, education, and welfare.
"While the impact of AI on the New Zealand economy is unquantifiable, many sectors should be investing more in AI technologies to make the most of their full potential," says Institute of Directors chief executive Simon Arcus.
"This extends from start-ups, to SMEs and corporates to government agencies and educational institutes.
"AI is an extraordinary challenge for our future," he says. "Establishing a government led high-level working group is critical in helping New Zealand rise to that challenge."
But first there's a more basic challenge: defining exactly what AI is. There is no single definition, and what does and doesn't make the cut has been a point of contention for years.
Explanations vary, from machinery that is able to perform tasks usually requiring human intelligence, through to the "Turing test", developed by the pioneering British scientist Alan Turing in 1950.
Turing suggested a machine could pass the test if it could convince an evaluator, over a text-only channel, that it was human.
That test may already have been passed: in 2014, a computer successfully imitated a 13-year-old boy in a text conversation, fooling a number of judges.
Arcus says that while AI may make most people think of humanoid robots and cyborgs, the reality is that the technology is already used every day. Targeted advertising and customised music and movie suggestions from the likes of Netflix, Spotify and Apple Music are all examples of the way AI is already at work.
"Look at [Apple's] Siri, look at [Amazon's] Alexa, but also look at things like the technology that detects unusual activity on your credit card," Arcus says.
"Fraud detection is an algorithm that understands the way you spend and can pick up big anomalies in your spending; it's not a human doing that and we know that's only going to get better," he says.
"We are living with AI and getting major benefits from it."
Automated or artificially intelligent machines may be more accurate than humans under some circumstances, but there is still significant margin for error and artificial intelligence can result in unpredictable behaviour.
Perhaps most notable is the "flash crash" of 2010, when algorithmic trading systems sent US stock markets plunging, shedding about US$1 trillion in value and then rebounding, all within 36 minutes.
Another incident this year involved the first fatality in a self-driving car, when a Tesla Model S failed to avoid a collision with a truck in Florida, killing its owner.
While Tesla took responsibility for the fault and beefed up its collision-avoidance technology, the issue of liability will only grow as machinery becomes more automated, amid what Arcus says is a lack of clear legislation.
"Keeping up with those sorts of changes is becoming an increasing challenge," Arcus says. "I think there is obviously going to be some need for collective global thinking on these things at some point, because we have to understand things like liability around self-driving cars for example."
Today's report also highlights the impact AI will have on workplace safety regimes and the liability attached to them. As AI systems become increasingly autonomous, employment and health and safety legislation will need to be clear about the responsibilities and liabilities of directors and organisations, it says.
Whether or not governments can keep up, AI is only going to accelerate, as investment in the sector increases. Global tech giants have been joining the race to buy up start-up AI businesses in an effort to lead the pack in what could be one of the most lucrative industries of the future.
Globally, hundreds of companies are working to advance the field of AI and this year alone there have been more than 40 acquisitions of AI companies, with corporate giants including Google, IBM, Microsoft, Facebook, Apple and Salesforce competing for market share.
"It's often compared to the Wild West - it has that oil rush feeling," Arcus says.
"Those existing large entities like Facebook and Google have capital power that some of the other businesses don't have to merge and grow through just buying things," he says.
"And I think that's another issue there that we'll have to look closely at, is the control of this power and how much of a monopoly these companies have."
Industry and job disruption is also a concern highlighted in the report, with Arcus stressing the need for a cross-society group to analyse the opportunities and potential risks.
"We need to grasp the nettle when it comes to government and private sector-led understanding of AI," Arcus says. "If you look at Canada, Japan, Germany and the United States, they are already grappling with this stuff, so the second part of that question is how will New Zealand fare.
"I am hugely optimistic that New Zealand has always been able to deal with change and is very agile when it comes to these sorts of things and very innovative at making sure that we get the best of it."
The report closes by calling for greater collaboration and co-ordination among AI researchers, policymakers and industry.
"The potential economic and social opportunities from AI technologies are immense," it declares. "The public and private sectors must move promptly and together to ensure we are prepared to reap the benefits, and address the risks of AI."
As a schoolboy in Rotorua, Shane Legg spent much of his spare time researching artificial intelligence - an interest that led to him co-founding a company that was bought by Google in 2014 for a rumoured $1 billion.
Artificial intelligence (AI) - the sort of intelligence seen in machines or software - was not well known at the time, so Legg researched it however he could.
"I mostly taught myself from a variety of sources," Legg said last year. "I remember an article in the Encyclopaedia Britannica on something called 'Alpha-Beta Search' at the Rotorua public library. I figured it wouldn't be too hard to build a chess-playing program based on this algorithm, so that's what I did."
After completing degrees in maths, statistics, economics and computer science at the University of Waikato and University of Auckland, Legg decided to pursue AI as a career. He did a PhD in Switzerland, then postdoctoral work at University College London before meeting neuroscientist and former teenage chess prodigy Demis Hassabis and former video game designer Mustafa Suleyman. In 2010 they founded DeepMind Technologies.
Two years after the company had raised its first round of funding and hired a team of researchers, some of the areas in which they were working became commercially important and Google came knocking - eventually convincing Legg and the team to sell.
DeepMind's technology aimed to mimic human thought processes and the company attracted early investment from the likes of Tesla Motors chief executive Elon Musk and entrepreneur Peter Thiel.
One of DeepMind's biggest breakthroughs was creating a system that could teach itself to play a wide range of computer games including Space Invaders and Pong. The company made headlines this year when its AlphaGo program beat a human professional player at the traditional Chinese game of Go.
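The self-teaching behind those game players is reinforcement learning: the program tries actions, observes rewards and gradually learns which moves pay off. The Python sketch below shows tabular Q-learning, a simple ancestor of the deep reinforcement learning DeepMind actually used, applied to an invented five-state corridor; it illustrates the idea, not DeepMind's code:

    import random

    # Invented task: a 5-state corridor; action 1 moves right, 0 moves
    # left; reaching the right-hand end earns a reward of 1.
    N_STATES = 5
    alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration
    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    for episode in range(500):
        state = 0
        while state < N_STATES - 1:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if random.random() < epsilon:
                action = random.choice((0, 1))
            else:
                action = 0 if Q[state][0] > Q[state][1] else 1
            next_state = max(0, state + (1 if action == 1 else -1))
            reward = 1.0 if next_state == N_STATES - 1 else 0.0
            # Core Q-learning update: nudge the estimate towards the
            # reward plus the discounted best value of the next state.
            Q[state][action] += alpha * (
                reward + gamma * max(Q[next_state]) - Q[state][action])
            state = next_state

    print([round(max(q), 2) for q in Q])  # learned values rise towards the goal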
The company has also been involved in healthcare. One venture includes a system to analyse eye scans, looking for early signs of diseases that lead to blindness. In August it announced it was working with University College London Hospitals, looking for ways to automatically differentiate between healthy and cancerous tissue.