Project on the moderation of terrorist content on social media

Social media now play a major role in digital communication, whether between private individuals or between companies and organizations and their followers. This increasingly extends to the dissemination of terrorist content, in particular live streaming: in the recent past, terrorist attacks have repeatedly been streamed live on social media platforms. The sheer volume of content uploaded every minute poses massive problems for the operators of such platforms. To filter illegal content, whether terrorist or otherwise criminal in nature or hate speech, it is not enough to leave moderation to human moderators; instead, platforms rely on automated tools to review content.

Conflicting goals inevitably arise in such content moderation. The most obvious, though not the only, conflict here, as in many other areas, is that between accuracy and speed: while prohibited content should be filtered as quickly as possible, there is also an interest in capturing as much of the relevant content as possible while not blocking too much content that is in fact permissible (so-called overblocking).
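One way to make this tension concrete is to describe an automated filter in terms of precision and recall: overblocking corresponds to low precision (too many permissible posts flagged), while missed terrorist content corresponds to low recall. The following minimal Python sketch uses purely illustrative numbers, not data from the project, to show how tightening a filter shifts the balance between the two.

    # Illustrative only: toy counts showing how a stricter filter trades
    # missed terrorist content (false negatives) against overblocking of
    # permissible content (false positives).

    def precision_recall(true_pos, false_pos, false_neg):
        """Compute precision and recall from confusion-matrix counts."""
        precision = true_pos / (true_pos + false_pos)
        recall = true_pos / (true_pos + false_neg)
        return precision, recall

    # Hypothetical outcomes of the same filter at two thresholds.
    lenient = dict(true_pos=70, false_pos=10, false_neg=30)
    strict = dict(true_pos=95, false_pos=60, false_neg=5)

    for name, counts in [("lenient filter", lenient), ("strict filter", strict)]:
        p, r = precision_recall(**counts)
        print(f"{name}: precision={p:.2f} (overblocking risk), "
              f"recall={r:.2f} (missed-content risk)")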

These conflicting goals can be examined from both a technical and a legal perspective. In their research project, Ilka Pohland and Alessia Zornetta adopt precisely these two perspectives, look for ways to reconcile the different objectives, and point out stumbling blocks in practical implementation.

A paper on the project has been published in the International Journal of Law and Information Technology.