European regulators are giving Microsoft, Facebook, Twitter, and YouTube six months to voluntarily get more aggressive about blocking and removing hate speech and terrorist-related content or face possible new regulations next year.
Today, the European Union issued new guidelines, saying it “invites online platforms to step up their efforts to remove illegal content online.” Those four companies had signed a Code of Conduct back in 2016, pledging to combat such content.
But European officials feel that while some progress has been made, it hasn’t gone as far or as fast as they’d hoped. So along with the new guidelines, the EU is giving the companies a deadline of next May, when it will review their progress and consider whether new legislation is needed to force compliance.
“We are providing a sound EU answer to the challenge of illegal content online,” said Andrus Ansip, vice president for the Digital Single Market, in a statement. “We make it easier for platforms to fulfill their duty, in close cooperation with law enforcement and civil society. Our guidance includes safeguards to avoid over-removal and ensure transparency and the protection of fundamental rights such as freedom of speech.”
The new guidelines cover three categories:
- Detection and notification: The EU wants companies to designate a point of contact so anyone can quickly report illegal content. It also wants these companies to forge more relationships with third parties that monitor such content to create “trusted flaggers.” The companies should also develop more “automatic detection technologies.”
- Effective removal: The removal process should be more efficient and include firm deadlines.
- Prevention of re-appearance: The EU wants the companies to develop tools to deter other users from re-uploading the same illegal content.
The EU did note that since the Code of Conduct was signed, “removals of illegal hate speech have increased from 28 percent to 59 percent.” However, there was concern that in 28 percent of these cases it took a week before the content was removed.
The EU regulators also reported that “approximately 80-90 percent of content flagged by Europol has been removed since its inception. In the context of child sexual abuse material, the INHOPE system of hotlines reported already in 2015 removal efficiencies of 91 percent within 72 hours, with 1 out of 3 content items being removed within 24 hours.”
Regulators said this was sufficient progress for them to continue with a model based on voluntary cooperation. That cooperative model will be reviewed again in six months.
“The digital world offers unprecedented opportunities but, in the wrong hands, poses a serious threat to our security,” said Julian King, Commissioner for the Security Union, in a statement. “Internet companies have a central role in eliminating online terrorist material by stepping up their efforts and showing corporate social responsibility for the digital age.”