Twitter Content Rules Get Stronger, Not Smarter

Twitter is my most volatile guilty pleasure, at times entertaining me for hours and at others censoring harmless jokes from my feed for either no reason or a very poor one.
Automatically detecting malicious content and distinguishing it from content that is only inappropriate out of context is incredibly nuanced, and there’s no perfect way for Twitter to manage its punishment systems. However, I do think the company has the resources to staff active representatives who can monitor red flags and make sure that disputes are properly sorted out.

Twitter’s track record with identifying and punishing abusive behavior has always been spotty, either hitting the nail on the head by suspending truly deserving accounts or punishing innocuous users for posting content with personal, non-aggressive context. An example of the former was the permanent suspension of alt-right commentator and former UCI visitor Milo Yiannopoulos after he viciously harassed actress Leslie Jones. The latter happened most recently when actress Rose McGowan’s account was temporarily suspended for posting a personal phone number during her deconstruction of the Harvey Weinstein sexual harassment scandal.

Free speech has been the foundation of most of Twitter’s policies, which has complicated the issue of abuse on the website for many years. Walking the line between letting anyone say anything and protecting people from wanton harassment has been a tough task for Twitter, and it shows. The website has slowly tightened its content control criteria, automatically hiding tweets with “controversial language” (words ranging anywhere from “stupid” to extreme expletives) and locking accounts until particularly offending tweets are deleted.
These pushes have been slow, painful, and messy, wrongfully locking accounts for posting in-jokes as often as they rightfully shut down deserving accounts. Throughout my years on Twitter, I’ve seen both harmless and extremely dangerous accounts get shut down while serial harassers are allowed to keep theirs with no repercussions.

The system is far from perfect, and I’m not sure it’ll get better anytime soon. Twitter CEO Jack Dorsey responded to the backlash over McGowan’s suspension by promising to implement stronger content filters that will ban a wider range of threats, language, and imagery from appearing on Twitter. This would be a welcome announcement if the former content rules had been enforced well, which, as stated before, they have not been.

I want to be excited to see regulations against hate symbols and the promotion of violence, but I’m worried that they will be sloppily thrown into the mix of what already exists and perpetuate existing problems.

The letter Dorsey’s team wrote detailing their plan of action against these new forms of abuse mentions that their product and operational teams will be more attentive and thorough when responding to suspension appeals. This in itself is an exciting announcement, as it will hopefully mean that users better understand which rules they have broken and how to avoid breaking them again in the future. However, I would much rather see an acknowledgement that tweets can have offensive content that, when viewed in the context of the poster’s following, is purely sarcastic.

This is an incredibly difficult thing for an algorithm to analyze, and I would much rather see the operational team focus on decoding this context than tending to people who have already been struck down by Twitter. The best example of this I can think of is the recent suspension of YouTube personality “I Hate Everything” (IHE), who was locked out of his account for sarcastically tweeting that he would kill his brother’s entire family. The joke, admittedly shocking to anyone who does not know of their relationship, is purposefully extreme, implying that, if the tweet were followed through on, IHE would also have to kill himself. The tweet was not seen by a particularly wide audience, yet it was still enough cause for Twitter to lock his account, preventing him from connecting with his audience or notifying people of his uploads for the duration of the suspension.

There is a possibility that the team’s increased activity will quickly reverse suspensions handed out by accident, but again, the time spent unable to tweet can have a surprisingly large impact on the lives of content creators who are expected to be in relatively constant communication with their fans.

Hoping that Twitter will ever have a perfect method of screening content is overly optimistic, but I think it’s reasonable to expect the company not to swing its hammer wherever possibly offensive content may arise. I hope it begins to consider more factors before suspending an account in the future, and hopefully doesn’t increase the character limit to 280. If I wanted to read full paragraphs I’d go to Reddit.

Isaac Espinosa is a third-year electrical engineering major. He can be reached at imespino@uci.edu.
