Twitter will soon prompt users who reply to a tweet with “offensive or hurtful language” to revise their replies before they are posted, the company tweeted.
When users hit “send” on a reply, they will be warned if its wording resembles language in posts that have been reported, and asked whether they would like to revise it.
Twitter has long been under immense pressure to clean up hateful and abusive content on its platform. Its policies prohibit users from targeting individuals with slurs, racist or sexist tropes, or degrading content, and it enforces those rules through users flagging rule-breaking tweets and through technology, among other means.
According to its most recent transparency report, the company took action against nearly 396,000 accounts under its abuse policies and more than 584,000 accounts under its hateful conduct policies between January and June of last year.
Could this new test be a first step toward curbing the more extreme forms of content on the platform?