You might even have used the anti-virus application from F-Secure, the global company he helped found, which was originally called Data Fellows, a name that didn't really describe what it did.
The first mentions of Mikko Hyppönen in the New Zealand Herald go back to 2000. He's up there with Eugene Kaspersky, Peter Norton (who actually didn't develop an antivirus) and the late John McAfee, who did.
Hyppönen's Law
The title If It's Smart, It's Vulnerable refers to "Hyppönen's Law", which states that adding functionality and communications capabilities to a device makes it vulnerable - to being hacked, that is. As Hyppönen points out, you can't hack a traditional wind-up wristwatch, but a smartwatch is a little computer that runs code and is online, which means it can be hacked.
The same goes for modern cars and industrial control systems, which should make everyone sit up.
That long experience from the very beginning of the PC revolution means there is some real gold in the book.
Hyppönen is candid in his stories, recounting several facepalm-inducing mistakes that will be very familiar to older-than-the-hills people like yours truly, who learnt things the hard and embarrassing way.
It's not just InfoSec geeks and developers who will enjoy If It's Smart, It's Vulnerable: everyone uses IT nowadays, and Hyppönen's book provides a great summary of how networked computing turned into what it is today, a monster that's unlike anything humanity has ever seen before.
The last time I spoke to Hyppönen, he called the Internet the biggest social experiment ever conducted, and that couldn't be more true.
In the book, there's good thinking around ethics and why people can't be reprogrammed to act the way we'd like, because that's just how we are.
No matter how much awareness training users receive, they will still do the human thing, like opening email attachments from strangers, and make everything go kaboom.
The book isn't without flaws that could've been debugged. Sharper editing could've brought together topics that are discussed several chapters apart, and some of the chapters are brief non-sequiturs that needed more development.
The parts on blockchain, bitcoin, non-fungible tokens and crypto-currencies in general feel underdone, with not enough emphasis, in my opinion, on how they are speculative instruments, soaked in crime, and of little or no use for everyday transactions.
Reading If It's Smart, It's Vulnerable made me realise the problem with the "security is a process" expression that InfoSec people like to utter: it means you can't just fix something and go back to sleeping with both eyes shut.
Rather, you have to constantly pay attention to what's going on, which, as any geek worth their salt knows, can be really difficult.
Nevertheless, the hard work InfoSec people have put in over the years has paid off in some ways.
There are no computer viruses (as such) anymore. Opening Microsoft Word documents isn't quite the Russian roulette it used to be, when horrible macro viruses could be triggered, causing the IT department sleepless nights as it battled devastating infections.
Instead, we have ransomware from criminal organisations which operate affiliate programmes where wannabe hackers can rent malware for extortion. We have large-scale state-sponsored hacking, espionage and sabotage operations, and surveillance capitalism, in which tech giants know all about you and sell that information to the highest bidder.
That's much worse, because it's not like any of us can avoid using computers or connecting to the Internet. There's more of the same on the horizon, as artificial intelligence is likely to become cleverer than humans at several tasks, but luckily, Hyppönen and other experts don't think it means the machines will try to kill us.
Ideally, though, we should try to ditch some of the dystopian aspects of networked computing mentioned above, but how? The book touches on some of that, such as following Apple's example of not letting code run wild on iPhones and iPads, and taking a hard line on privacy and security.
Whether that'll be enough is anyone's guess, but maybe, just maybe, rescuing humanity from the dangers of ethics-free technology could be the killer application for AI and machine learning?