French NGOs took Twitter to court in Paris on Monday morning, accusing the social media giant of not doing enough to tackle hate speech online.
Four French NGOs – SOS Racism, SOS Homophobia, the Union of Jewish Students of France and J’accuse – sued Twitter on 11 May. A Paris court began hearing the case on Monday before adjourning further hearings until Dec. 1; Twitter and the NGOs agreed to take part in mediation before the next session.
At the heart of the matter is Twitter’s refusal to provide information about its moderation processes. The four NGOs have filed to obtain this data.
Under France’s Avia Law on combating hate speech online (May 2020), social networking platforms are required to disclose how they restrict the dissemination of such content and how they respond to reports. For example, they must reveal how many moderators they employ, where those moderators work and what training they have received. Twitter does not share this information.
Social media networks have come under renewed fire in France in recent days after a teacher’s beheading was announced on Twitter. A photograph of the teacher’s body, accompanied by a message claiming responsibility, was posted on the social network. It was also found on the attacker’s phone, discovered near his body. France’s anti-terrorism prosecutor, Jean-François Ricard, confirmed on Saturday that the Twitter account belonged to the attacker.
The post was quickly removed by Twitter, which also said it had suspended the account for violating company policies.
Twitter’s low removal rate
In the European Commission’s fifth evaluation of its Code of Conduct on Countering Illegal Hate Speech Online, published in June 2020, Twitter ranked last among the major platforms for removing hate speech. The review covered a six-week period at the end of 2019: Facebook removed 87.6 percent of reported content and YouTube 79.7 percent, but Twitter took down only 35.9 percent.
So is Twitter the worst offender when it comes to content moderation? “Yes, if you only consider the top four (Instagram, Twitter, Facebook and YouTube) – but there are hundreds of other social media platforms,” said lawyer Philippe Coen, who founded the NGO Respect Zone to combat cyber-violence. “Twitter has actually made a big effort to improve its moderation in recent months. It just needs to do a lot more.”
“Interestingly enough, the CEOs of all the major social media platforms are asking for more clearly defined rules on hateful content. They cannot act without supporting legislation. And there are many ways to combat cyberbullying beyond the courts: you need to start with schools, companies and communities. We do not want to work against the social media platforms; we want to work with them.”
Twitter declined to comment on the case.
Is Twitter responsible for bullying?
As cyber-violence has increased exponentially in recent years, there has also been a push to expand hosting providers’ obligation to moderate content. However, this obligation is still not clearly regulated.
Social networks are not currently legally responsible for the content they carry. They have the legal status of hosts, which limits their liability for content published on their networks. They are only required to delete content after it has been reported, and only if it is clearly illegal.
The question in the courts is whether Twitter has neglected its legal responsibility to moderate content.
“The term negligence legally refers to an error of care, a breach of duty of care or a lack of care,” said French information technology and data protection lawyer Olivia Luzi, speaking to FRANCE 24. “Given the legal obligations currently imposed on platforms, and the enormous task of monitoring all content at the exact moment it appears online rather than from the moment it is reported, it is difficult to qualify what constitutes negligence.”
“Twitter currently has reporting and removal measures in place that comply with the European Commission’s recommendations. It must review most reports within 24 hours and, if necessary, block access to the content,” explains Luzi.
“This case against Twitter will affect all hosting providers and therefore all social media, including online magazines and their comment sections,” says Luzi. “They can no longer systematically hide behind the great and beautiful principles of freedom of expression while tolerating social media tools being hijacked from their purpose and used as vectors of hatred. It is up to these organizations to take the initiative to moderate without necessarily being accused of censorship, and to work together to build a digital world that reflects the values they advocate.”
Definition of hate speech
In September, the World Federation of Advertisers announced that it had reached an agreement with Facebook, Twitter and YouTube. For the first time, the companies agreed on common definitions of content such as hate speech and aggression, established harmonized reporting standards across platforms and authorized external auditors to oversee the system, which will launch in the second half of 2021.
In July, an independent audit commissioned by Facebook itself accused the social network of failing to tackle hate speech and fake news. The auditors, who included the Anti-Defamation League, condemned it for putting freedom of speech above all else.
One week ago, Facebook explicitly banned Holocaust denial.
The social network said its new policy bans “any content that denies or distorts the Holocaust”. Facebook CEO Mark Zuckerberg wrote that he had “struggled with the tension” between free speech and banning such posts, but that “this is the right balance”.
“This step from Facebook is a revolution; it surprised everyone,” Coen said. “However, it is a long, long battle for all the social media platforms, and we are only at the beginning of it. We are now working to convince digital companies to build the ideas of human dignity and respect into their digital design, which the architects of these platforms have completely forgotten. These sites are designed to capture your money and your data, but not your decency.”