Twitter’s Latest Anti-Harassment Measures Still Don’t Do The Trick

Twitter has taken some additional steps to help users deal with online harassment, it announced Thursday. Too bad they still fall well short of what’s needed.

Give Twitter credit for trying, though. Its first change lets users report impersonation and disclosure of personal details—i.e., “doxing”—even if they aren’t the ones directly targeted. That’s important because bystanders may see improper posting of personal information before victims do, and because individuals subject to extensive harassment often find it easier to hand off the task of reporting abusers to a third party.

Twitter began allowing bystanders to report harassment back in December; now it has added these new categories, along with reports of self-harm, to the mix.

See also: Twitter CEO Admits The Company Sucks At Dealing With Harassment

Twitter also announced that it’s adding “several new enforcement actions for use against accounts that violate our rules.”

And that’s the good news.

So Far, So Bad

But Twitter—whose CEO, Dick Costolo, recently admitted that the service sucks at dealing with abuse—still has a long way to go. For instance, it’s still possible for online trolls to evade sanction by deleting their tweets. As security researcher Jonathan Zdziarski writes: 

Twitter will not even allow you to file a report without providing links to the offensive tweets. In many cases, death threats are made and then deleted within hours, or even minutes. While Twitter retains all of their deleted content for law enforcement subpoenas, they pretend like tweets are actually deleted when talking to the public. No problem, right? We all make screenshots of these things when they happen, so I sent them into Twitter; very clear and complete screenshots of death threats coming from these little turds. The response from Twitter? “We don’t accept screenshots.” 

Although Twitter says it has tripled the size of the support team that handles abuse reports, its refusal to consider deleted tweets is a glaring oversight.

Also, it still falls to victims of impersonation to prove their identity by sending Twitter a government-issued ID, as well as proof of registration of a trade name or pseudonym if that’s what someone else is impersonating. That’s not always easy—especially when a business name or pseudonym isn’t registered.

As for those enforcement actions—well, the idea sounds good, but since Twitter didn’t disclose any specifics, it’s hard to know how effective they might be. Experts like Eva Galperin and Nadia Kayyali of the Electronic Frontier Foundation have taken Twitter to task for both inconsistency and a lack of transparency:

In truth, Twitter’s abuse policies are open to interpretation. And the way Twitter does interpret them can be difficult for outsiders to understand—which perhaps explains complaints from users that they seem to be enforced inconsistently. Users have argued that abuse reports often disappear without response, or take months to resolve. On the other hand, it appears that sometimes tweets are removed or accounts are suspended without any explanation to the user of the applicable policy or reference to the offending content.

Users who have been warned or temporarily banned for abuse may be asked to verify contact information, Ars Technica reports. But this wouldn’t stop them from simply creating a new account and resuming the same behavior. 

What A Better Twitter Might Look Like

Mobile interaction designer and developer Danilo Campos offered some solutions to these problems in a blog post last July. For example, Campos suggests letting users block all newly created accounts—less than 30 days old, say, or those with low follower counts. Ideally, such autoblocks would be opaque to the affected user, who could keep happily tweeting into a sudden vacuum.
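To make that proposal concrete, here is a minimal sketch of how such an opt-in autoblock filter might work. The 30-day cutoff comes from Campos’s suggestion; the follower-count threshold, the `Account` structure, and the `should_autoblock` helper are illustrative assumptions, not anything Twitter has announced.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical thresholds: the 30-day figure echoes Campos's post;
# the follower-count cutoff is an illustrative guess.
MIN_ACCOUNT_AGE = timedelta(days=30)
MIN_FOLLOWERS = 10

@dataclass
class Account:
    handle: str
    created_at: datetime      # timezone-aware creation timestamp
    follower_count: int

def should_autoblock(account: Account, now: datetime) -> bool:
    """Return True if mentions from this account should be hidden
    from a user who has opted into the autoblock filter."""
    too_new = now - account.created_at < MIN_ACCOUNT_AGE
    too_few_followers = account.follower_count < MIN_FOLLOWERS
    return too_new or too_few_followers

# Example: a week-old account with three followers gets filtered out,
# while that account sees no indication anything happened.
now = datetime.now(timezone.utc)
troll = Account("brand_new_egg", now - timedelta(days=7), 3)
print(should_autoblock(troll, now))  # True
```

The key design point is that the check runs on the recipient’s side, so the filtered account keeps tweeting normally and never learns it has been screened out.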

Such measures, combined with a way to deal with deleted harassing tweets, could help curb the abuse problem that Twitter has suddenly decided to stop ignoring.

Lead image by MKH Marketing
