COMMENT:
Enterprise employees reading this might be familiar with Citrix, the digital workplace delivery provider.
A huge security hole that's easy to abuse was found in Citrix Application Delivery Controller (formerly known as NetScaler) just before Christmas last year, with tens of thousands of the systems exposed on the internet.
Proofs of concept showing how to exploit the vulnerability were released on the internet, and it didn't take long before attackers started scanning for and breaking into NetScaler boxes.
What is taking longer is Citrix releasing patches for the serious vulnerability, but the company is now getting a move on and there should be software updates ready by this weekend. By issuing specific commands, NetScaler admins can mitigate the vulnerability, albeit not on all versions of the product.
Hackers are making the most of the slow patching, though. One enterprising attacker appears to have assembled a special piece of malware that, when it runs on a NetScaler box, scans for and deletes other malicious code infecting the system.
On top of that, the hacker applies the mitigation measures to stop other attackers from getting in.
"How nice of the person!" Not really: the malware comes with a cryptographically secured backdoor, meaning the hacker could return to compromised boxes, or sell access to them. Exclusive access even.
Hackers have patched systems like this before and it's a reminder that just because a box is patched doesn't mean it's secure.
Either way, Citrix admins who hadn't applied the mitigation measures before January 8 should consider their devices compromised and take them offline for further investigation.
Speaking of investigations, the New York Times flipped the Dystopi-o-Meter to 11 with its story about the Australian-founded startup Clearview, which does facial recognition on a massive scale.
By that I mean Peter Thiel-backed Clearview has "scraped" (that is, copied) billions of images from Twitter, Facebook, YouTube and other sites without anyone's permission: not the subjects', and not the sites' themselves, which somehow failed to notice that Clearview was downloading gazillions of users' face pics.
Now Clearview's software has been sold to US police departments, which seem to love it even though they have no idea how it works or how accurate it is. Hoan Ton-That, who rose to fame in 2009 for his Gmail credential-snaffling ViddyHo instant messaging worm, has cobbled together a facial recognition system that he says is right three quarters of the time.
What could possibly go wrong, especially in states with the death penalty and armed, "results-oriented" police who don't understand that facial recognition is notoriously inaccurate with non-white-skinned people?
Last year we learnt that Chinese authorities are employing facial recognition, with police wearing camera-equipped shades; mobile phone and SIM card buyers there must now have their faces scanned too. Clearview perhaps shows that the West is keen to catch up with China.
Facial recognition is an aspect of increasingly pervasive surveillance that operates outside current privacy regulation and, it would seem, without any real ethical concerns.
The barrier to entry is low if you wish to start dabbling in facial recognition. A quick search finds heaps of open source projects, many with code that works well. There's even the Python-based ClearviewAI, written by a developer in Malawi and probably not at all related to Hoan's Clearview.
Add limitless cheap cloud computing resources to that and, help: we should be worried, because there's no guarantee that the developers really know what they're doing, or that facial recognition users understand the technology either.
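To see how little code is involved, consider that most of these open source projects reduce a face to an embedding vector and declare a match when two embeddings are close enough. Here's a minimal pure-Python sketch of just that matching step; the vectors are made up for illustration (real models use 128 or more dimensions), and the 0.6 tolerance mirrors a common library default rather than anything universal:

```python
import math

def euclidean_distance(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(known, candidate, tolerance=0.6):
    """Declare a match when the embeddings are within the tolerance.

    The right tolerance depends entirely on the model that produced
    the embeddings -- which is exactly where untrained users go wrong.
    """
    return euclidean_distance(known, candidate) <= tolerance

# Fabricated 4-dimensional embeddings for illustration only.
alice_on_file = [0.10, 0.80, 0.30, 0.50]
same_person   = [0.12, 0.79, 0.31, 0.48]
stranger      = [0.90, 0.10, 0.70, 0.20]

print(is_match(alice_on_file, same_person))  # close embeddings: True
print(is_match(alice_on_file, stranger))     # distant embeddings: False
```

The whole "AI" decision comes down to a distance check against a threshold someone picked, which is why accuracy claims mean little without knowing how that threshold and the underlying model were validated.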
There are various creative ways to fight back. The brave pro-democracy demonstrators in Hong Kong use masks to hide their faces and laser pointers to blind cameras, but that's kind of heavy-handed.
Anti-facial recognition makeup is an old favourite and, as a bonus, makes you look like a hip eighties clubster. A pair of anti-facial recognition glasses that reflect and absorb light to confuse the artificial intelligence trying to match your visage is probably more practical, though.
Technology moves forward, and we might finally have found a beneficial use for "deepfakes", or AI-generated images of humans. Some I've seen look extremely realistic. We should perhaps start pumping social media sites full of pics of "people" who don't exist. Recognise that, AI!
On a more serious note, let's see if Facebook, Twitter, YouTube and the other scraped sites bite back against Clearview, ditto the privacy commissioners of this world.
Meanwhile, there is a silver lining to this: awareness of facial recognition abuse and danger might just kill selfies. I mean, who in their right mind would want to feed faulty AIs their likenesses, to be tracked on the quiet?