First, there's no doubt that Hutchins did the world a huge favour when he registered the domain that acted as a "kill switch" for WannaCry: the malware checked whether that name existed and, on finding that it did, stopped spreading. That didn't stop WannaCry in its tracks completely, but it prevented plenty of infections that would otherwise have caused large amounts of damage.
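The mechanism is simple enough to sketch. What follows is a minimal, hypothetical illustration of a kill-switch check, not WannaCry's actual code; the domain name is a placeholder, and the real malware performed its check via an HTTP request on Windows:

```python
import socket

# Placeholder only: the real kill-switch domain isn't reproduced here.
KILL_SWITCH_DOMAIN = "kill-switch.example.com"

def kill_switch_tripped(domain: str, timeout: float = 5.0) -> bool:
    """Return True if the kill-switch domain is reachable."""
    try:
        # WannaCry issued an HTTP request; a bare TCP connect to port 80
        # approximates the same reachability test.
        with socket.create_connection((domain, 80), timeout=timeout):
            return True
    except OSError:
        # Unregistered domain: the lookup fails and the worm carries on.
        return False

if kill_switch_tripped(KILL_SWITCH_DOMAIN):
    # Once the domain was registered, this branch was taken everywhere,
    # and the worm stood down.
    raise SystemExit
```

Note the inverted logic that made the accidental rescue possible: the domain being alive is what stops the malware, so registering it flipped the switch for every new infection worldwide.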
That good deed will count for something when Hutchins stands trial; how much remains to be seen. Other researchers who have at times crossed poorly defined boundaries in their malware investigations must be sweating bullets right now.
Usually, when a vulnerability is found in software or hardware, researchers devise a proof of concept. This shows the public and the vendors that their work holds water, but the same code can also be turned against unsecured systems by bad actors.
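To make the dual-use point concrete, here is a deliberately toy sketch; the target address, port, protocol and buffer size are all invented for illustration:

```python
import socket

# Entirely hypothetical target and flaw: suppose a service listening on
# TCP port 7777 crashes when a fixed-size name buffer overflows. The host
# below is a reserved documentation address (RFC 5737).
TARGET_HOST = "192.0.2.10"
TARGET_PORT = 7777

def trigger_crash(host: str, port: int) -> None:
    """Send an oversized field to demonstrate the hypothetical flaw.

    A vendor can reproduce the crash from this alone; an attacker could
    swap the padding for a working payload. Same code, two uses.
    """
    oversized_name = b"A" * 4096  # far larger than the assumed buffer
    with socket.create_connection((host, port), timeout=5.0) as sock:
        sock.sendall(b"NAME " + oversized_name + b"\r\n")

if __name__ == "__main__":
    trigger_crash(TARGET_HOST, TARGET_PORT)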
The question then becomes: is a researcher who wrote a proof of concept guilty of creating malware? And of disseminating it, if the code has been made public?
That's a grey area, along with researchers hanging out in cybercriminal forums to track malware writers and glean intelligence on their activities. It's a balancing act: if they don't, they could miss the next malware scourge; if they do, they could be arrested at the next cybersecurity conference.
This is a problem that's been on security researchers' minds for a while now - and governments have been pondering it too without much progress.
Malware, along with software and hardware vulnerabilities, is covered by the Wassenaar Arrangement, the post-Cold War successor to the CoCom export-control regime, which deals not just with conventional arms but also with dual-use goods and tech that can be used offensively.
Security researchers worldwide are keen to share information about malware and vulnerabilities with one another, so that problems can be fixed and defences devised against attacks. Governments, however, say "not so fast": they don't want that information reaching bad actors, criminals and terrorists.
Under Wassenaar-derived national rules, licences must be acquired before "exporting" vulnerabilities, and researchers who breach those provisions could be looking at severe penalties and jail time.
This is an area where almost everything is open to interpretation, thanks to sweeping and very harsh legislation that doesn't usually account for how technology works. That means if you are accused of cybercrime as part of your research, you need top lawyers costing a fortune, the kind of money only the very rich or big corporations have, or you could spend many years in prison.
A potential prison sentence hanging over your head as part of your job is, to put it mildly, a massive disincentive. This is very much a bad thing in a world riddled with insecure IT, one that needs researchers to find the gaping holes in critical systems and fix them before all hell breaks loose, as happened with WannaCry.
More dialogue between authorities and researchers is needed, as it's in nobody's interest to see people like Hutchins down tools for fear of going to prison. That will only benefit cybercriminals.