“A better regulation of social networks” after the assassination of Samuel Paty?
After the atrocious assassination of our colleague Samuel Paty, the time for tributes quickly gave way to political counter-offensives. For the Government's spokesman, the culprits were easy to find: “Things started on social networks, with videos, especially of this student's parent, and ended on social networks with this abject photo posted by the terrorist”. He added that “yes, they have a responsibility, yes, we must manage to supervise them better”.
The parliamentary majority had sought to legislate on the issue this year. The fact that this text was almost entirely struck down by the Constitutional Council did not make the Prime Minister humble, as one might have expected. This aborted attempt, suddenly recast as a feat of arms, was widely applauded in the Hemicycle, under the eager gaze of its architect, Laetitia Avia. Mr. Castex announced that the text would soon return “in another form”.
An uninformed observer of the scene would probably conclude: that the Constitutional Council obstructed the first law for reasons so futile that they hardly deserve attention; that the Government knows perfectly well how to regulate social networks and is preparing to demonstrate it; and that, in doing so, it will effectively prevent future terrorist attacks.
Unfortunately, none of these assertions seem to us to be true.
Before returning to the Samuel Paty massacre, it is necessary to briefly recall what rules are currently in force regarding social networks.
What content should social networks currently remove?
The rules that govern electronic communications in general, and removal obligations on social networks in particular, are largely presented on this site. We will therefore summarize them only in very broad strokes. The pre-existing communication model on the Internet was pyramidal: a small number of information producers addressed a multitude of readers, listeners and viewers. For each book, each newspaper published, each broadcast, it was possible to identify one or more people who had read the content in its entirety before publication, and who could be held accountable, civilly and criminally, for any illegality in it.
Such a model is obviously not transposable to social networks, which carry millions of messages, “likes” or shares per minute. Mark Zuckerberg is not the editor-in-chief or managing editor of Facebook. Neither he nor even all of his teams combined can have personal, a priori knowledge of each publication passing through their tool. Another legal qualification was available: that of technical intermediary, which initially targeted actors such as website hosts. But a social network is not quite that either. It is more than a dumb pipe through which content merely transits. It arranges content, articulates it, increases or decreases its visibility, offers to propagate it, to enrich it, to answer it… by automated means. But until a very delicate “third way” is devised between the regime of publishers and that of simple technical intermediaries, it is the latter that applies to social networks.
What are the obligations of this status regarding the removal of illegal content? They can be summarized in three points:
- the social network has an obligation to act only if it has personal knowledge of the content (which is rare) or if the content has been reported to it;
- it must then determine whether the content is “manifestly illicit”;
- if so, it must “promptly” remove it.
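Purely as an illustration, the three points above can be sketched as a decision procedure. This is a minimal, simplified model of the regime described here, not an implementation of any real platform's moderation system; all names are hypothetical.

```python
from enum import Enum

class Decision(Enum):
    NO_DUTY_TO_ACT = "no duty to act"
    KEEP_PENDING_JUDGE = "keep online pending a judge's order"
    REMOVE_PROMPTLY = "remove promptly"

def moderation_duty(known_to_platform: bool,
                    manifestly_illicit: bool,
                    judge_ordered_removal: bool = False) -> Decision:
    """Illustrative sketch of the host's duty under the 2004 regime."""
    if not known_to_platform:
        # No personal knowledge and no report: no obligation to act.
        return Decision.NO_DUTY_TO_ACT
    if judge_ordered_removal or manifestly_illicit:
        # Flagrant illegality, or a judge's order, triggers prompt removal.
        return Decision.REMOVE_PROMPTLY
    # Any doubt benefits the content: only a judge may decide.
    return Decision.KEEP_PENDING_JUDGE
```

The key feature of this model, developed below, is the middle branch: content that is merely *arguably* illegal stays online unless a judge orders otherwise.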
It is important to note that the 2004 law, which established this regime, originally targeted “illicit” content without qualification. It was a decision of the Constitutional Council, more than fifteen years ago, that laid down the “manifestly illicit” standard, and it is absolutely fundamental:
(…) these provisions cannot have the effect of engaging the responsibility of a host that has not withdrawn an information denounced as illicit by a third party if it does not manifestly present such a character or if its withdrawal has not been ordered by a judge (…)
Either it is absolutely flagrant that the content is contrary to French law, and the platform must react; or the illegality is questionable, debatable, and the platform must leave the content in place pending a possible order from a judge. In principle, even a slight doubt benefits the content. Why? Because the Constitutional Council refuses to delegate to Twitter, Instagram or TikTok the power to decide what one has the right to say, write or show in French public debate. In a democratic society, that prerogative can belong only to magistrates.
These prerequisites having been recalled, let us return to the terrible Samuel Paty affair.
The first videos criticizing the teacher
I have not personally viewed these videos, so I will propose an analysis based on the characteristics reported by the journalist Marc Rees:
In the famous video, a parent of a pupil at the Collège du Bois d'Aulne explained that his daughter had been shocked by the “behavior” of the teacher, described as the “thug” of the story, before giving his account of the facts: the showing of caricatures of the Prophet in class, and the request that Muslim students leave the room so as not to be shocked. He concluded his testimony, broadcast on Facebook, by asking all those who “do not agree” to send him a message on a “06” (a French mobile number), considering that “this thug should no longer remain in the National Education system, should no longer educate children; he must go educate himself”.
Another video, by Abdelhakim Sefrioui, who presents himself as a member of the Council of Imams of France (CIF), goes so far as to evoke the humiliation of Muslim students. In it, he recounts his “disagreement” and even his “amazement” that the administration, though informed of these facts, tolerated them.
He indicates that the CIF and the Muslims in France “categorically refuse this kind of irresponsible and aggressive behavior and do not respect the right of these children to maintain their psychological integrity”. He repeats that the teacher is above all a “thug”, before making known his willingness to “mobilize for action before the establishment and before the academic inspection” (…).
Mr. Rees considers that these contents are not manifestly illicit in nature. His opinion is not an isolated one, and I share it. It is only in hindsight, knowing what hideous fire these sparks would start, that the reader can imagine that these messages obviously called for censorship. Considered coldly, at the moment they were produced, they constitute virulent, unpleasant, embarrassing criticisms of a teacher's action. They may amount to minor offences such as defamation or insult, but no more than that. They certainly do not constitute an open call to murder.
These messages, however, lose their innocuous character when interpreted in the light of the social context in which they appeared: the “separatism”, to use the word of the President of the Republic, of a fraction of society overflowing with resentment, in rupture with common values. The sparks in question were released in a field dried up by anger, whose flare-up, if not certain, was at least likely.
In order to make the right decision, therefore, it was necessary to weigh the absence of an open call for violence in these publications against the existence of extremely dangerous groups of individuals who might seize such an opportunity to slake their thirst for blood. Who has the training, and the political and constitutional legitimacy, to carry out such a delicate balancing? Who can decide, if necessary, that freedom of expression must bow to the risks to the life of a public official whose name is thrown to the crowd as fodder? It is the judge, and the judge alone.
To judge quickly but to judge well?
There are only bad reasons to abandon such a decision to Facebook, an American commercial company steeped in the culture of free speech, and probably terrified at the idea of being accused of Islamophobia for removing such content. First bad reason: money. It costs less to let Facebook hire moderators than to hire judges; for the remedy, consult a tax expert. Second bad reason: speed. French justice, in a state of historic impoverishment, already struggles to judge civil cases in less than two years: one cannot imagine it reacting at the speed of social networks, which is that of lightning.
However, in theory, nothing prevents the setting up of a shock team of judges from the bench, sitting at their computers with headsets, carrying out “real-time processing” as prosecutors already do when they direct police and gendarmerie investigations. A judge is able to rule alone, in a few minutes. Such a judge could be solicited by the platform, confronted with content that is not manifestly illegal but whose potential danger it senses; by specialized police or gendarmerie services; or by intelligence services. In this respect, it is hard to imagine French intelligence services calling Twitter's or Pinterest's contract moderators to reveal confidential elements, resulting from their surveillance, that would justify special treatment for a publication. It is far more conceivable that they would call a magistrate. By “special treatment”, we mean both suggesting the withdrawal of content that did not otherwise seem to require it, in order to protect a target known to be in danger, and suggesting that content be kept online when the judge would have censored it. Indeed, our intelligence services sometimes draw from the life of an online publication, from its shares and its circulation, valuable information about the groups of individuals they monitor.
Calls for murder, dissemination of images of assassination, apology for terrorism
If the two videos mentioned so far probably escape the qualification of “manifestly illicit content”, the same cannot be said of other content produced thereafter. Before the attack on Samuel Paty, genuine calls for murder may have been made. After his death, images of his corpse circulated. And unfortunately, as always after an attack, messages of support for the terrorist's action were broadcast.
This time, the illicit nature of these publications is not in doubt. It follows that the platforms to which they are reported must promptly remove them. This requires two things. First, that they allocate sufficient resources to moderation, and the State can legitimately harden its tone when this is not the case. But second, they must not find themselves drowned in trivial alerts: even in the paradigm promoted by the Avia law, there is no real hierarchy between sources of alerts. Any Internet user who, through lack of lucidity or through malevolence, alerts the platform about perfectly harmless content tugs at the platform's sleeve, forces it to perform an examination (sometimes partially automated, of course) and wastes its time. On this model, the Constitutional Council pointed out that:
the legislator has forced online platform operators to fulfill their obligation to withdraw within twenty-four hours. However, in view of the above-mentioned difficulties in assessing the manifest unlawfulness of the reported content and the risk of numerous, possibly unfounded reports, such a deadline is particularly short.
Twenty-four hours is therefore sometimes too short a time to empty the ocean of reports, even with a large spoon. At the same time, it is far too long in view of the extremely serious social disorder caused by images of decapitation, calls for murder or apology for terrorism.
The solution lies in prioritizing reports. This is precisely the approach of the draft European regulation on online terrorist content: each Member State will designate an administrative or judicial authority with the power to report terrorist content to platforms, and content so reported will then have to be withdrawn in less than an hour.
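To make the idea of prioritization concrete, here is a minimal sketch of a triage queue in which a report from the designated national authority carries a one-hour deadline while an ordinary user report carries twenty-four hours, so that authority reports always jump ahead. The class, the deadlines and the content identifiers are all hypothetical illustrations, not any platform's actual system.

```python
import heapq
from datetime import datetime, timedelta

# Hypothetical deadlines: one hour for the designated authority (as in
# the draft EU regulation), twenty-four hours for ordinary user reports
# (the Avia-law paradigm).
DEADLINES = {"authority": timedelta(hours=1), "user": timedelta(hours=24)}

class ReportQueue:
    """Triage queue: the report with the earliest deadline is handled first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so heapq never compares payloads

    def report(self, content_id: str, source: str, now: datetime):
        deadline = now + DEADLINES[source]
        heapq.heappush(self._heap, (deadline, self._counter, source, content_id))
        self._counter += 1

    def next_report(self):
        deadline, _, source, content_id = heapq.heappop(self._heap)
        return content_id, source, deadline
```

On this sketch, an authority report filed minutes after a flood of user reports is still processed first, which is exactly the hierarchy between sources of alerts that the current regime lacks.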
Compared to such a mechanism, the Avia law as dreamed up by the Government is clearly of no use in the case at hand.
With regard to this content congratulating the terrorist or displaying the macabre remains of Samuel Paty, we wish to emphasize firmly that the platform-centered approach cannot suffice. Removing the content is useful to stop its propagation, but it does not erase the crime committed by the authors of these messages. They must be identified and prosecuted. There is no doubt that, in such a terrifying case, the appropriate means will be deployed, which will demonstrate, moreover, that the use of pseudonyms has not protected the authors of these publications. On a daily basis, however, online hatred goes unprosecuted, fueling a dangerous feeling of “online impunity” that owes nothing to an alleged “legal vacuum” (no such vacuum exists) and everything to a chronic lack of resources devoted to the issue.
It should also be pointed out that Internet users who merely participated in the viral propagation of this content can also be held civilly and criminally liable. Since an old popularization article on “the legal consequences of retweeting”, I have developed on this site a more detailed view of this issue. For example, a criminal offence covers not only the recording but also the “dissemination” of violent images; the simple material act of sharing on social networks falls without difficulty under this qualification, regardless of the accompanying text. As for the offence of apology for terrorism, while it obviously applies to the authors of messages of support for the terrorist, it could also concern Internet users who pass on such messages without any critical distance, their silence possibly being regarded by the judge as support, depending on the circumstances.
The risk of switching to encrypted channels
An important caveat must be stated: all the reasoning built up to this point is likely to be undermined by the use of “opaque” communication tools. As open social networks become better regulated, there is a risk that the circulation of certain content will shift to other channels. End-to-end encrypted messaging makes it impossible for the platform and for the public authorities to access the texts and videos that transit through it.
It is true that authors of publications intended to reach a large audience of strangers need, at some point, to leave these private loops in order to achieve “viral” dissemination. Serious moderation by the platforms is therefore likely to prevent, for example, attempts to massively influence national elections through the dissemination of misleading news. On the other hand, in the case at hand, the most radical fundamentalists can get to know each other beforehand and organize an encrypted circulation of their propaganda or calls for murder. Restricting the size of “WhatsApp groups” is not an obstacle for well-organized groups: if the circles contain at most 5 users each but are carefully arranged in a pyramid, the result is exponential diffusion, whose mechanics are well known in this period of pandemic.
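The arithmetic of that pyramid is easy to make explicit. The following is an idealised model, assuming each recipient forwards the message to a fresh group of new people and ignoring any overlap between groups; the function name and the parameters are illustrative only.

```python
def pyramid_reach(fanout: int, depth: int) -> int:
    """Total people reached when each member relays the message to
    `fanout` new recipients, over `depth` relay levels.
    This is the geometric sum fanout + fanout**2 + ... + fanout**depth."""
    return sum(fanout ** level for level in range(1, depth + 1))

# With groups capped at 5 members, only six relay levels are needed
# for the message to reach tens of thousands of people.
```

Even under the 5-user cap, the audience grows geometrically with each relay level, which is why the cap alone cannot stop a well-organized network.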
For all that, it is not at all desirable to radically prohibit encrypted communications, as we are trying to explain here.
What proposals for improvement?
The reader has probably reached the foot of this article without having found the simple and clear measure that would avoid the next Samuel Paty affair. The explanation is simple: no such measure exists.
A few leads have been suggested. A “real-time” processing unit for content that is not manifestly illicit could be set up, composed of judicial judges. Once the European regulation on fighting online terrorist content has been finalized, this unit could also be designated as the body empowered to order the removal of content in less than an hour. It would then be necessary to ensure that this task force is supplied with content, which implies multiplying the sentries on the ground: platform moderation teams, specialized police and gendarmerie services, intelligence services, and possibly civil society actors recognized for their reliability in this area.
Without such a measure, the proposal of the Prime Minister's Office to penalize the dissemination of personal data without the consent of the person concerned is unlikely to change anything. Not only does it remain to be seen what this offence would add to existing ones, but its enactment would probably not create any new obligation for the platforms to act. Indeed, it will be rare for a publication to prove “manifestly unlawful” simply because it contains personal data and adopts a critical tone; otherwise the present article would itself fall foul of the offence, as having harmed the reputation of Jean Castex or Laetitia Avia. A serious examination of the facts will be necessary, and it should be conducted… by a magistrate, once again.
Even if the few suggestions presented here were followed, let us be perfectly clear: we should not expect too much. While it is indisputable that social networks amplify or aggravate certain evils of society, they do no more than that. If some Internet users, in the midst of the pandemic, massively relayed a video presenting SARS-CoV-2 as having been deliberately manufactured by the Pasteur Institute, it is above all the result of an unprecedented mistrust of “elites”, a worrying lack of critical thinking, and an inability to handle quality sources of information. As for the “ordinary” attacks on teachers (insults, blows), which are legion, they can certainly be coordinated by digital means, but they are above all the hideous offspring of a society that regularly spits on its teachers and for which verbal and physical violence is a means of interaction like any other. As for the abominable murder of our colleague Samuel Paty, which is of a completely different nature, it results from such a rupture with the very foundations of life in society that we leave it to specialists in religious fanaticism to explain mental mechanisms that are completely inaccessible to us. Social networks hold up a mirror in which we discover the dreadful face of our nation today. We can content ourselves with breaking the mirror. Do we want to?
To resolve such profound difficulties, we must have the courage to say that there is no miracle cure. No measure taken the very next day, one that had surprisingly never been thought of until now, will make all these demons disappear. Summoning the big Internet companies the day after an attack, with drum rolls, and pretending once more to rap their knuckles may give the political leader the illusion, for a brief moment, of being in control of something. Banging one's fist on the table, in front of the cameras, to demand that a “secularism” be written into the Constitution that has been there from the start certainly makes the misinformed voter shudder with pleasure. One can go so far as to discuss the layout of the halal or kosher shelves of supermarkets, feeding the conflation of ordinary religious practice with terrorist delirium. None of this solves anything. Let us pay tribute to Cédric O, all the more sincerely as we share none of his past positions on subjects such as facial recognition in public spaces: he has offered one of the only contributions from political personnel that is measured, distanced and subtle.
A work of several decades, of unprecedented complexity, is emerging in an attempt to pick up the pieces of a broken republican pact. It does not rule out shorter-term measures, some in the area of social networks, others to reinforce the intelligence services and to protect the Republic’s professors.
Our colleague Samuel Paty had the courage to confront, in his classroom, all the complexity of the world. He was killed by the partisans of a binary, simplistic and brutal vision of society. On which side stand those who equate buying halal food with terrorism? Those who have nothing better to say could keep quiet.
The time of tributes must be followed by the time of mourning. The time of mourning must be followed by the time of reflection.
Emmanuel Netter, professor of private law at the University of Avignon
The reflections initiated in this article have been continued here, following Donald Trump’s expulsion from various social networks.