If the blocked person tries to access your profile, they will see a message saying they have been auto-blocked by Twitter, not you.
Twitter product manager Jarrod Doherty said in a post, "Our technology takes existing relationships into account, so accounts you follow or frequently interact with will not be autoblocked ... [But] we won't always get this right and may make mistakes, so Safety Mode autoblocks can be seen and undone at any time in your Settings."
Safety Mode rolls out from today. You can enable it under Settings. English must be set as your language.
If you're having issues with trolls - and who isn't these days, as social media grows more short-tempered than ever under pandemic pressure - Twitter also recommends its new-ish feature that lets you restrict who can reply to a tweet, plus the long-standing "protected" mode, which lets you accept or decline followers and makes your tweets visible only to those followers.
There are also options to make yourself less discoverable to online harassers by disabling location and photo-tagging.
We'll have to see how effective Twitter's new tools prove - but there's no doubt we need more help.
Research by Netsafe - the lead agency for the Harmful Digital Communications Act - found that one in five NZ adults and twice as many young people received a digital communication that negatively impacted their life in 2020.
And the introduction to the draft of Netsafe's new safety code, released on December 3, said: "As 2021 has progressed, Netsafe is continuing to record a new 'high' in the number of reports related to harmful digital communication. Experiences like this, directly and indirectly, can cause physical, financial, and psychological harm; decrease user confidence; and undermine investment in the digital economy and society."
Advocacy groups canvassed by the Herald were glad a safety code was underway.
But they found the voluntary code's provisions too vague and watery, and noted it did not include specific timelines for responses, or any sanctions for social media companies that did not follow its guidelines.
They also questioned Facebook, Twitter, TikTok and other platforms' involvement in the creation of the draft code.
"It looks to me like a 'tick box' code rather than one with real potential to bring about the change needed to create an internet where everyone feels safe and welcomed," Tohatoha chief executive Mandy Henk told the Herald.
Tohatoha advocates for a more equitable internet, and works on initiatives to curb hate speech and misinformation online.
InternetNZ public policy manager Andrew Cushen said while the draft had gone to public consultation, submitters could only tweak a code that the industry had had a strong hand in creating. Instead, affected community groups should have been involved from the ground-up, Cushen said.
The administrator for the new code - which is not finalised but likely to be Netsafe - will be part-funded by the social media platforms.
"I have concerns that if Netsafe were to be the administrator of the code, it could create a conflict with their role as the official conflict resolution body for the Harmful Digital Communications Act," Henk said.
Netsafe CEO Martin Cocker - who resigned as the draft was released - said the code was a work in progress. Consultation workshops will follow the public submission process.
Earlier, Meta Australia-New Zealand policy director Mia Garlick said Facebook encouraged governments and government agencies like Netsafe to set online safety policy. The social network welcomed clarification of the ground rules in each jurisdiction it operated in.