Now, it's worth noting that the number of leaked accounts continues to grow at a frankly alarming rate. Last week, the tally stood at around 166 million compromised accounts on haveibeenpwned.com.
A few days later, as I write this, we're up to a grand total of 174,451,409 accounts. The figure includes the "mother of all data breaches", namely the Adobe one that saw over 152 million accounts leak out on the Internet.
That total is actually on the low side, because haveibeenpwned.com does not include the millions of other accounts that were never publicly released, from LinkedIn, Evernote, Kickstarter and Target, to mention just a few of the sites and organisations that have suffered data breaches.
Those accounts would add something like 182 million credentials to the tally. These are colossal numbers, so I asked software architect and Microsoft Most Valuable Professional (MVP) Troy Hunt in Australia, who runs the haveibeenpwned.com site, how much space it all takes up.
Troy said that the millions of records don't actually take up much space in modern data volume terms - only around 15 gigabytes.
The data is stored uncompressed in Microsoft's Azure cloud platform using Azure Storage, and Troy has written about how he did it last year.
With that many credentials on one site, and haveibeenpwned.com being rather fast at searching them, I asked Troy if there's a risk that the service could be abused by hackers to validate email addresses - and yes, it could, he said.
"However, if I was "Mr Attacker", I'd go and just grab the original dump of data - they're all public - and enumerate through that locally rather than making all those slow, unnecessary HTTP requests," Troy said.
I also asked Troy if the colossal number of leaked credentials meant that having your email address as the username is a bad idea.
He disagreed and said that you still need an email address for password resets, meaning the two are rather intrinsically linked.
"The bigger challenge is credential reuse and weak passwords which goes to the previous point on password managers," Troy said.
That said, everything needs a logon these days, and it's quite frankly unmanageable to make up long, secure passwords for each and every site and hope to remember them.
What's the answer then?
Use a password manager. These will let you create really long, impossible-to-guess passwords, unique for each site.
A good password manager will also audit your existing set of passwords to ensure they're good and strong, and warn about old ones.
"The only secure password is the one you can't remember" as Troy puts it. Yep, as long as your password manager can though.
The Spark outage revisited
Right, I've had quite a number of people asking what actually happened during the massive Spark internet outage. Was it hordes of frenzied Spark customers clicking on malware-linked stolen pics of J-Law, or something else?
Spark sent out a post-mortem message some days ago, but there were still a few unclear areas, so I asked for clarification. Here's a chronological summary of events, based on Spark's official answers, that I hope is useful.
1) On Friday at 8pm, unknown digital miscreants who had found open DNS resolvers on 138 modems belonging to Spark customers started a series of distributed denial of service (DDoS) attacks that lasted through to early Sunday morning that weekend.
According to Spark, the attacks came in waves, and changed in scale and nature over time. Actions were taken to prevent any risk of further exposure, Spark said - see below.
2) Spark's understanding is that the hackers started sending queries through the open resolvers to mount DNS amplification attacks, which generate large amounts of traffic (there's a sketch of how such an open resolver can be probed, and the amplification measured, after this list).
3) These queries were directed through Spark's Domain Name System servers, the ones that translate names like some.internet.address.co.nz into numeric addresses like 8.8.4.4.
4) The queries appeared to come from a variety of locations around the world, some of which may have been forged by the attackers. However, the destination domains for the attacks were in Eastern Europe, Spark said.
5) As a large number of queries hit Spark's DNS servers, they became swamped while trying to resolve (translate) and respond to these.
6) To fix the problem, Spark blocked incoming traffic on port 53 UDP, as used for DNS queries.
7) Spark also disconnected some customer modems and blocked access to the open resolvers.
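For the technically minded, here's a rough Python sketch of what probing for an open resolver looks like, and where the "amplification" comes from: a tiny recursive DNS query is sent to the device on UDP port 53, and if it comes back with resolved answers, the resolver is open - and the response is typically many times larger than the query. The target address below is a documentation placeholder, not a real device; only probe equipment you own or are authorised to test.

```python
# A sketch of an open-resolver probe: send a small recursive DNS query and see
# whether the host answers with resolved records. The response-to-query size
# ratio is the "amplification" an attacker gets for free with a spoofed source.
import socket
import struct
import secrets

def build_query(name, qtype=1):
    """Build a minimal DNS query packet with the 'recursion desired' flag set."""
    # Header: random ID, flags 0x0100 (RD), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", secrets.randbelow(65536), 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    labels = b"".join(bytes([len(p)]) + p.encode("ascii") for p in name.split("."))
    return header + labels + b"\x00" + struct.pack(">HH", qtype, 1)

def probe_resolver(address, name="example.com", timeout=3.0):
    """Return (answer count, amplification factor) if the host resolves recursively."""
    query = build_query(name)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(query, (address, 53))
        try:
            response, _ = sock.recvfrom(4096)
        except socket.timeout:
            return None
    if len(response) < 12:
        return None
    flags, _questions, answers = struct.unpack(">HHH", response[2:8])
    if flags & 0x0080 and answers > 0:  # RA (recursion available) bit and answers present
        return answers, len(response) / len(query)
    return None

if __name__ == "__main__":
    result = probe_resolver("192.0.2.1")  # placeholder address (TEST-NET-1)
    if result:
        answers, factor = result
        print(f"Open resolver: {answers} answer(s), roughly {factor:.1f}x amplification")
    else:
        print("No recursive answer - not an open resolver, or unreachable")
```

An attacker doesn't care about the answer at all: they forge the source address of the query so the oversized response lands on the victim instead, which is why blocking port 53 UDP and cutting off the open resolvers stopped the traffic.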
The fixes, along with the hackers ceasing the attack, meant that things eventually went back to normal on Sunday and Monday.
No malware has been found on customer devices as of yet, but Spark told me it hasn't ruled it out.
There's one big question left, however: what to do with vulnerable broadband modems, old and new? Obviously, if these can be abused, another attack could take place.
In many cases, modem vendors will never issue security updates for modems, so would Spark consider perhaps a trade-in programme for these to ensure that customers and the provider's network remain secure?
"Considering best options at the moment," was the answer from Spark.