Twitter announced that it has “officially launched” a new Violent Speech policy that outlines its “zero-tolerance approach towards Violent Speech.” Its content is similar to Twitter’s previous violent threats policy, though it manages to be both more specific and more vague.
Twitter rewrites its rules on violent content under Elon Musk
Both policies ban you from threatening or glorifying violence in most scenarios (each version has carve-outs for “hyperbolic” speech between friends). However, the new set of rules appears to expand on some concepts while cutting down on some others. For example, the old policy stated:
Statements that express a wish or hope that someone experiences physical harm, making vague or indirect threats, or threatening actions that are unlikely to cause serious or lasting injury are not actionable under this policy, but may be reviewed and actioned under those policies.
However, wishing someone harm is covered by the new policy, which reads:
You may not wish, hope, or express desire for harm. This includes (but is not limited to) hoping for others to die, suffer illnesses, tragic incidents, or experience other physically harmful consequences.
Except “new” is a bit of a misnomer here because pretty much that exact policy was expressed in the old abusive behavior rules — the only meaningful change is that it’s been moved and that Twitter’s stopped providing examples.
What does feel like a meaningful change is the new policy’s lack of explicitness about who it’s designed to protect. The old one made it clear right up front: “You may not threaten violence against an individual or a group of people.” (Emphasis mine.) The new policy doesn’t include the words “individual” or “group” and instead refers to “others.” While that could absolutely be interpreted as protecting marginalized groups, nothing in the policy’s text explicitly says so.
There are a few more changes worth highlighting: the new policy bans threats against “civilian homes and shelters, or infrastructure” and includes carve-outs for speech related to video games and sporting events, as well as “satire, or artistic expression when the context is expressing a viewpoint rather than instigating actionable violence or harm.”
The company also says that punishment — which usually comes in the form of an immediate, permanent suspension or an account lock that forces you to delete offending content — may be less severe if you’re acting out of “outrage” in a conversation “regarding certain individuals credibly accused of severe violence.” Twitter doesn’t provide an example of what exactly that would look like, but my understanding is that if you were to, say, call for a famous serial killer to be executed, you may not get a permanent ban for it.
Of course, my interpretation doesn’t matter all that much — the actual decisions will be made by whatever’s left of Twitter’s moderation team.
Once upon a time, before Musk actually owned Twitter and had to deal with keeping advertisers happy, he said that the platform “should match the laws of the country” and pitched the purchase as an attempt to save free speech. And while he’s continued to tweet about it, Twitter still doesn’t allow many things that are legally permitted. These updated rules are just the latest example of that.
I don’t mean that as a critique of Twitter, to be clear. A social network that actually based its moderation policies only on what’s legally permissible would be an utter hellscape that I, and I think most of the population, would have no interest in. I’m not a lawyer, but I don’t see anything about banning bots in the First Amendment. (Perhaps that’s because it was written in the 1700s.)