Twitter on Tuesday suspended the account of Richard Spencer, one of the leading voices of the alt-right movement, amid a wider crackdown on hate speech and cyberbullying. It also expanded the use of several existing tools and changed policies to make it easier for users to fight back against abuse and harassment.
The company suspended the personal account of Spencer, president of the National Policy Institute, a white nationalist think tank, as well as the accounts of the organization itself, its online magazine, and Washington Summit Publishers, Spencer's book publishing firm.
Spencer referred to the move as “corporate Stalinism.”
The suspensions and larger crackdown come amid an increase in reported hate crimes since the presidential election, whose outcome was praised by a number of alt-right groups even as thousands of protesters took to the streets in cities across the country.
“The Twitter rules prohibit violent threats, harassment, hateful conduct and multiple account abuse, and we will take actions on accounts violating those policies,” a Twitter spokesperson told TechNewsWorld.
Twitter’s mute button, which allows users to silence certain accounts, has been enhanced to let users mute keywords, phrases and entire conversations they consider offensive, similar to the Unfollow feature on Facebook.
Who Ya Gonna Call?
The change comes months after the widely reported onslaught against Ghostbusters star and Saturday Night Live cast member Leslie Jones. After being bombarded by racist and sexist attacks, she temporarily left the platform.
“Twitter provides an opportunity for users to have more control of their experience on the site,” said Zack Fuller, paid content analyst at Midia Research. “I expect the feature to be welcomed by the Twitter community.”
The mute function for words and hashtags applies only to notifications; muted tweets will still appear in the timeline and in search results, Twitter noted.
Twitter’s hateful conduct policy already bans conduct that targets users based on race, ethnicity, national origin, sexual orientation, gender, gender identity, religion, age, disability or disease. The new tools give users a more direct ability to report that type of abuse going forward.
Twitter has retrained all of its support teams on the cultural and historical context of hateful conduct, and it has implemented a refresher program.
The company had received feedback from users that “people didn’t always know where or if they could report hateful conduct, especially if not targeted at them personally,” Twitter spokesperson Brielle Villablanca told TechNewsWorld.
Policing hate speech on a social network can be difficult, because enforcement can shade into what many consider censorship, noted Jim McGregor, principal analyst at Tirias Research.
“Eventually, artificial intelligence will make it easier,” he told TechNewsWorld, “but that is going to be further into the future — and even then, nothing is perfect, because AI needs to learn and adapt just like a human.”
Despite those concerns, cracking down on hate speech and abusive behavior will be important if social networks are to avoid driving users away altogether.
Cyberbullying has been an ongoing problem on Twitter, Facebook and other social media platforms, and it reportedly was a factor in Twitter’s inability to find a buyer last month, when Salesforce, Disney and other companies considered making a bid.