Reliant on a binary system, computers struggle with the sort of subjective judgments humans make every day. So what if someone could interfere with their perception, or even their decision-making?
This isn't some wild, futuristic problem. The cars on our driveways are getting smarter all the time, and increasingly connect to the internet in order to download over-the-air updates.
Companies including Tesla and Cadillac have built technology capable of changing lanes and keeping a safe distance from surrounding traffic. Traffic lights and cars are beginning to "talk" to each other via the internet.
John Chen, BlackBerry's chief executive, who has shifted his company towards software designed to protect fleets of self-driving cars, has said such vehicles have the potential to be "fully loaded weapons".
If traffic lights were all connected to a wider system via the internet, a hacker could wreak havoc, he argues.
"People could intentionally make everything crash," he says. Of course, Chen has a vested interest in making such a claim – his company relies on business from car companies who entrust it with the responsibility to protect their systems.
Are his concerns realistic? And how are car companies dealing with the risk? In many American cities, early versions of self-driving cars are already on the roads, being improved and developed by companies racing to become the first to perfect this potentially lucrative technology.
CES, held annually in Las Vegas, has slowly become a motor show, and it's not just BlackBerry that sees an opportunity here. The French government-funded Alternative Energies and Atomic Energy Commission (CEA) has also developed a system it says can block attacks that use sound and light to target a driverless car's decision-making.
"Hackers are getting better every day, and it gradually becomes possible to produce attacks using disturbances that are simultaneously difficult to detect for humans and effective in various real world contexts, e.g. variations of the scene luminosity or sound reverberation in the environment," the Commission said.
These types of attacks have received a lot of attention, including GPS and Lidar "spoofing", where a car is tricked into thinking the road is laid out differently than in reality, or into "seeing" things that aren't actually there.
Dave Butler, Uber's head of platform security, points out that the real world can hold such tricks, too.
"For example, someone can change a stop sign, or take down a stop sign, or like my neighbours, let the trees grow in front of the stop sign anyway," he says.
The company says it protects its mapping data with the most sophisticated security it has, including encryption.
"For certain classes of attacks to be fully effective, they would have to not only adjust the real world, they would have to adjust all of the mapping and processes in the past, which is cryptographically speaking very difficult."
More worrying is a car company's long supply chain.
"That is a long, deep surface area where the vulnerability potentially was introduced, long before it was a self driving car," he says.
At the security conference Black Hat, hackers showed this by hacking a Tesla, one of the most high-tech cars in existence.
"It is always the same thing. It's always a weakness, it's a vulnerability, it's on some old software, and the truism is there are extremely bright people through either employment or a lack of other hobbies that have time on their hands. And they will be able to find and exploit those weaknesses," says Butler.
Uber tries to deal with this by controlling access to the parts, but it's also a case of limiting the day-to-day connectivity of the cars. San Francisco-based company Cruise doesn't allow its vehicles to receive inbound connections, and they don't have Bluetooth or WiFi.
"By reducing the amount of attackable code, we make the vehicle more secure and eliminate concerns about the inherent problems of code dealing with potentially malicious inbound data," its information security researcher Charlie Miller, a prominent "white hat" hacker, wrote in a blog post last year.
Miller says the company's concern is mostly about long-distance internet hacks, where a malicious person takes control of the car from afar.
Dmitry Polishchuk, head of self-driving cars at Yandex, which is showing off its driverless cars at CES, insists the fears are "overblown".
"There are many other problems that the industry needs to deal with first," he says, including actually making the cars work properly.
He argues that the self-contained nature of Yandex's cars – each makes its driving decisions on board, without consulting an outside computer or network – means the only real risk is from someone with physical access to the car. He points out that the Tesla hacks required the researchers to have this access, because they relied on connecting the car to a malicious WiFi network.
Yandex, a Google-scale giant with a search engine, email and app-based taxi services, has faced pressure from the Russian government to share encryption keys from other parts of its business. How would it deal with a similar request for its rideshare data? Polishchuk sees these requests as an inevitability.
"It's obvious. Any government – Russian, Chinese, US government – will request data."
Not just data – law enforcement will require control. In the future, he predicts that police will have the ability to stop a self-driving car remotely.
Yandex protests that being Russian makes no difference, and it's true that other governments are also flexing their data-collection muscles and demanding more information from the transport companies that have previously had a free ride.
Uber is currently embroiled in a row with Los Angeles local government over data sharing, and in October had its licence revoked in the city after refusing to share real-time data with local authorities.
How worrying this is depends on how much you trust your government to be responsible with your data. But if you are concerned about privacy, it's not just anonymous criminal hackers you need to think about.