Twitter’s ongoing efforts to counter online harassment are commendable. And its latest policy changes, which expand Twitter’s prohibition on violent threats to cover a much broader range of threatening behavior, are long overdue.
But one possible change the company announced Monday—an abuse filter that users can’t opt out of—could have unintended consequences. Taking away the option for intended recipients of harassment or threats to see what’s coming their way also eliminates their ability to respond in a way they deem appropriate.
Twitter described this abuse filter as a “test” product feature intended to automatically flag tweets as potentially abusive and to limit their “reach.” It examines what Twitter calls a “wide range of signals and context” including the age of the tweeting account and similarities between a given tweet and others that its safety team has deemed abusive.
Those abusive tweets, however, won’t disappear entirely. They’ll still exist on Twitter, even though the intended recipients—and, perhaps, other users—will presumably be unable to see them unless they already follow the senders or search out their tweets. And unlike a “quality filter” that Twitter recently introduced for verified users, the abuse filter cannot be turned off.
Shielded From Harassment—Like It Or Not
Twitter’s goal here—to limit the spread of sociopathic threats and to tamp down harassment campaigns—is certainly a good one. To its credit, Twitter also notes that the test feature “does not take into account whether the content posted or followed by a user is controversial or unpopular.”
But there’s still a big problem with an abuse filter you can’t turn off or control: It can limit your ability to respond to legitimate threats.
Just consider how Twitter’s other policy change, the long-overdue expansion of its violent threats policy, could be blunted by the abuse filter. The policy change expands the definition of violent threats from “direct, specific threats of violence against others” to “threats of violence against others or promot[ing] violence against others.”
It’s a big step forward, given how often those harassed on Twitter have been told that the abusive tweets they flagged don’t violate Twitter rules. The new policy has some teeth, too, since Twitter has given its support team the power to temporarily lock abusive accounts. (Support personnel could already ask such users to delete specific tweets or to verify their phone numbers.)
Can’t Fight What You Can’t See
All that’s great, but it overlooks one important thing: It’s impossible to report threatening posts you never see. An abuse filter, for example, could allow a user to post violent threats that will be read by a horde of like-minded followers but never by the intended recipient.
This does nothing to limit the post’s reach among the people most likely to act on it. Meanwhile, it decreases the likelihood that anyone will report the tweet or the account that posted it.
Some users may prefer to be blissfully unaware of harassing tweets they can’t see. But others may want to see the content of abusive tweets to determine whether they need to take additional measures to protect themselves.
In an op-ed for the Washington Post, Twitter general counsel Vijaya Gadde wrote that the company’s goal is “welcoming diverse perspectives while protecting our users.” Unfortunately, shielding users from threatening speech while rendering them incapable of dealing with the consequences does little to protect them.
It’s unclear whether Twitter has taken these considerations into account or will address them in some fashion. Here’s hoping it does.
Lead image by Celtikipooh