For now, these are hypothetical questions. Two senior Pentagon officials, who spoke to The Times on background because much of their work on artificial intelligence is classified, say the United States is "not even close" to fielding a completely autonomous weapon.
But three years ago, Azerbaijani forces used what appeared to be an Israeli-made kamikaze drone called a Harop to blow up a bus carrying Armenian soldiers. The drone can automatically fly to a site, find a target, dive down and detonate, according to the manufacturer. For now, it is designed to have human controllers who can stop it.
Not long after that in California, the Pentagon's Strategic Capabilities Office tested 103 unarmed Perdix drones that were able, on their own, to swarm around a target. "They are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature," the office's director at the time, William Roper, said in a Defense Department statement.
The Estonian-made Tracked Hybrid Modular Infantry System looks like a mini-tank, with guns, a camera and the ability to track down and fire on a target. It is human-controlled, but its maker says it is "autonomous and self-driving with growing AI assistance."
"It is well within the means of most capable militaries to build at least relatively crude and simple autonomous weapons today," Scharre said in an email exchange after his speech.
As the ability of systems to act autonomously increases, those who study the dangers of such weapons, including the UN Group of Governmental Experts, fear that military planners may be tempted to eliminate human controls altogether. A treaty has been proposed to prohibit these self-directed lethal weapons, but it has drawn only limited support.
The proposed ban competes with the growing acceptance of this technology: at least 30 countries have automated air and missile defense systems that can identify approaching threats and attack them on their own, unless a human supervisor stops the response.
The Times of Israel has reported that an Israeli armed robotic vehicle called the Guardium has been used on the Gaza border. The US Navy has tested and retired an aircraft that could autonomously take off and land on an aircraft carrier and refuel in midair.
Britain, France, Russia, China and Israel are also said to be developing experimental autonomous stealth combat drones to operate in an enemy's heavily defended airspace.
The speed with which the technology is advancing raises fears of an autonomous weapons arms race with China and Russia, making it more urgent that nations work together to establish controls so humans never completely surrender life-and-death choices in combat to machines.
The senior Pentagon officials who spoke to The Times say critics are unduly alarmed and insist the military will act responsibly.
"Free-will robots is not where any of us are thinking about this right now," one official told me.
What they are exploring is using artificial intelligence to enable weapons to attack more quickly and accurately, provide more information about chaotic battlefields and give early warning of attacks. Rather than increase the risk of civilian casualties, officials say, such advances could reduce those deaths by overcoming human error.
The United States, for instance, is exploring using swarms of autonomous small boats to repulse threats to larger Navy ships. Yet Pentagon officials say US commanders would never accept fully autonomous systems, because it would mean surrendering the intelligence and experience of highly trained officers to machines.
While artificial intelligence has proved a powerful tool in numerous fields, it has serious vulnerabilities, like computer hacks, data breaches and the possibility that humans could lose control of the algorithm.
Only 28 countries have supported the call for a treaty banning such weapons, which the Campaign to Stop Killer Robots, an international coalition of more than 100 nongovernmental organizations, has been pressing for since 2012. In March, UN Secretary-General António Guterres said that "machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law." He called on a UN group to develop means, by treaty, political pressure or strict guidelines, to make that happen.
That work is futile unless the United States and other major powers lead the way.
Written by: Carol Giacomo
© 2019 THE NEW YORK TIMES