
How Twitter’s New Abuse Filter Could Backfire

Twitter’s ongoing efforts to counter online harassment are commendable. And its latest policy changes, which expand Twitter’s prohibition on violent threats to cover a much broader range of threatening behavior, are long overdue. 

But one possible change the company announced Monday—an abuse filter that users can’t opt out of—could have unintended consequences. Taking away the option for intended recipients of harassment or threats to see what’s coming their way also eliminates their ability to respond in a way they deem appropriate.

See also: Twitter CEO Admits Company Sucks At Dealing With Harassment

Twitter described this abuse filter as a “test” product feature intended to automatically flag tweets as potentially abusive and to limit their “reach.” It examines what Twitter calls a “wide range of signals and context” including the age of the tweeting account and similarities between a given tweet and others that its safety team has deemed abusive.

Those abusive tweets, however, won’t disappear entirely. They’ll still exist on Twitter, even though the intended recipients—and, perhaps, other users—will presumably be unable to see them unless they already follow the senders or search out their tweets. And unlike a “quality filter” that Twitter recently introduced for verified users, the abuse filter cannot be turned off.

Shielded From Harassment—Like It Or Not

Twitter’s goal here—to limit the spread of sociopathic threats and to tamp down harassment campaigns—is certainly a good one. To its credit, Twitter also notes that the test feature “does not take into account whether the content posted or followed by a user is controversial or unpopular.”

But there’s still a big problem with an abuse filter you can’t turn off or control: It can limit your ability to respond to legitimate threats. 

See also: Twitter’s Latest Anti-Harassment Measures Still Don’t Do The Trick

Just consider how Twitter’s other policy change, the long-overdue expansion of its violent threats policy, could be blunted by the abuse filter. The policy change expands the definition of violent threats from “direct, specific threats of violence against others” to “threats of violence against others or promot[ing] violence against others.” 

It’s a big step forward, given how often those harassed on Twitter have been told that the abusive tweets they flagged don’t violate Twitter rules. The new policy has some teeth, too, since Twitter has given its support team the power to temporarily lock abusive accounts. (Support personnel could already ask such users to delete specific tweets or to verify their phone numbers.)

Can’t Fight What You Can’t See

All that’s great, but it overlooks one important thing: It’s impossible to report threatening posts you never see. The abuse filter, for example, could allow a user to post violent threats that are read by a horde of like-minded followers—but never by the intended recipient.

This does nothing to limit the post’s reach to the people most likely to act on it. Meanwhile, it decreases the likelihood that someone will report the tweet or the account that posted it.

Some users may prefer to be blissfully unaware of harassing tweets they can’t see. But others may want to see the content of abusive tweets to determine whether they need to take additional measures to protect themselves.  

In an op-ed for the Washington Post, Twitter general counsel Vijaya Gadde wrote that the company’s goal is “welcoming diverse perspectives while protecting our users.” Unfortunately, shielding users from threatening speech while rendering them incapable of dealing with the consequences does little to protect them.

It’s unclear whether Twitter has taken these considerations into account or will address them in some fashion. Here’s hoping it does.

Lead image by Celtikipooh
