For security authorities, automated monitoring of social media is gaining increasing importance. | Photo: Dollar Gill on Unsplash

Early Warning? Opportunities and Limitations of Automated Internet Monitoring

Policymakers have invested considerable effort and research funding into understanding the role of the Internet in radicalisation processes and attack planning. This includes approaches to identifying radicalisation or “weak signals” of terrorist intentions in online behaviour. As a result, security authorities have become increasingly interested in computational approaches, including Artificial Intelligence. But what results have these research efforts yielded so far? Can computer science prove useful? And what are the possibilities and limitations of automated tools?

Previous research on radicalisation concludes that online interactions can play a role in violent radicalisation. However, Paul Gill et al. assert that “there is no easy offline versus online violent radicalisation dichotomy to be drawn.” Rather, engagement in online networks and non-virtual interactions go hand in hand. As violent radicalisation often involves interactions in both domains – online and offline – it seems reasonable to consider online written communication and behaviour in individual risk assessment. Yet what are the implications of the online-offline nexus for the identification of unknown risks?

Problems predicting “warning behaviours” in written communication

Risk assessment of terrorist behaviour usually looks at individual cases and considers a whole range of individual characteristics and behaviours, such as “warning behaviour.” Ex-post analyses of so-called lone offenders show evidence of such behaviour in written online communication prior to the attack, for example posting content that indicates an intention to use violence. Furthermore, the linguistic features of future lone offenders differ from those of other social media users: posts written by lone offenders show, for instance, higher degrees of anger than those of other user groups.

Building on these findings, the so-called “Profile Risk Assessment Tool” (PRAT) was developed. The tool automatically extracts 30 variables, ranging from personality traits to risk behaviour, from any given text, such as an individual social media profile. The assessment is rooted in word dictionaries for various linguistic indicators. The resulting scores are compared to a theoretical risk profile based on data for lone offenders, as well as to a large number of comparison profiles, including school shooters and users from various online sources such as Stormfront and Islamic Awakening. Although the model is able to distinguish lone offenders from randomly selected profiles, the actual number of politically or religiously motivated offenders in the data is very small (N=11). The PRAT considers a large number of comparison groups, including the average population and profiles of “radicalised” users who presumably were not involved in violent acts. In contrast, many technology-driven approaches build on simplified binary distinctions between radical and non-radical or extremist and non-extremist, and then train machine learning classifiers to reproduce these binary labels.
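To make the underlying mechanics concrete, the following is a minimal sketch of dictionary-based linguistic scoring of the kind such tools rely on. The dictionaries, indicator names and reference profile are invented for illustration; they are not the actual PRAT variables or data.

```python
# Minimal sketch of dictionary-based linguistic profiling, loosely inspired
# by tools such as PRAT. The dictionaries and the reference profile below
# are invented placeholders; the real tool scores 30 validated variables.
import re

# Toy word dictionaries for two linguistic indicators (assumptions).
DICTIONARIES = {
    "anger":    {"hate", "rage", "destroy", "enemy"},
    "violence": {"kill", "attack", "weapon", "fight"},
}

# Invented reference profile, standing in for scores derived from known cases.
RISK_PROFILE = {"anger": 0.04, "violence": 0.02}

def score_text(text):
    """Relative frequency of each dictionary's words per token."""
    tokens = re.findall(r"[a-zäöüß]+", text.lower())
    if not tokens:
        return {name: 0.0 for name in DICTIONARIES}
    return {name: sum(t in words for t in tokens) / len(tokens)
            for name, words in DICTIONARIES.items()}

def distance_to_profile(scores):
    """Euclidean distance between a text's scores and the reference profile."""
    return sum((scores[k] - RISK_PROFILE[k]) ** 2 for k in RISK_PROFILE) ** 0.5

profile_posts = "They are the enemy. We will fight them and destroy their lies."
scores = score_text(profile_posts)
print(scores)                       # e.g. {'anger': 0.17, 'violence': 0.08}
print(distance_to_profile(scores))  # smaller distance = closer to risk profile
```

A binary classifier trained on “radical” versus “non-radical” labels would replace the distance comparison with a learned decision boundary, which is precisely where the simplification criticised above takes place.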

Insights from PANDORA research regarding Islamist extremism

Given the paucity of evidence, the specificity, and thus the predictive value, of linguistic markers for future violent extremist behaviour remains unclear and is generally limited to the case of lone offenders. The main reason for this is the lack of obtainable data. In the course of the PANDORA research project, we analysed the case files of 72 German jihadists who were convicted of terrorism-related crimes between 2013 and 2018. Only a few case files include documentation of written communication by the perpetrator on online platforms or in chatrooms. Furthermore, it needs to be considered that an individual’s use of social media may have changed over the trajectory of becoming involved in terrorism. Hence, the available data often reflects different phases of the process and is therefore difficult to compare. We also observed that linguistic characteristics vary across social media platforms, or, in other words, that these platforms enable different forms of communication, e.g. via private user profiles on Facebook or via Telegram channels. We built a sample of 21 individual Facebook profiles of German-speaking Islamist/Salafist users as well as 18 Telegram channels representing different branches of Salafism in Germany (mainstream, radical, jihadist) and collected an average of 165 postings for each. This revealed that the individual profiles include considerably more offensive language and more expressive speech acts, such as expressing emotions, wishes or attitudes, than the Telegram channels. Given this, a study of linguistic markers for future violent behaviour needs to build on comparable types of communication in social media.
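A short sketch of the kind of cross-platform comparison this implies, with invented mini-corpora (the offensive-word dictionary and the example posts are placeholders, not the PANDORA data):

```python
# Sketch: comparing the rate of one linguistic feature across platform samples.
# The dictionary and the corpora are invented placeholders; the actual study
# compared 21 Facebook profiles with 18 Telegram channels (~165 posts each).

OFFENSIVE = {"idiot", "scum", "traitor"}  # toy dictionary (assumption)

corpora = {
    "facebook_profiles": ["You traitor!", "What an idiot.", "Peace to all."],
    "telegram_channels": ["Lecture announcement.", "New statement published."],
}

for platform, posts in corpora.items():
    tokens = [t.strip(".,!?").lower() for post in posts for t in post.split()]
    rate = sum(t in OFFENSIVE for t in tokens) / len(tokens)
    print(f"{platform}: offensive-token rate = {rate:.1%}")
```

Scoring a Facebook profile against a baseline built from Telegram channels would conflate platform effects with individual risk; baselines have to be built separately per type of communication.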

Automated monitoring of social media platforms?

Existing research does not provide an adequate empirical evidence base to identify future violent offenders based on their written communication on social media. But even more comprehensive empirical evidence would not enable automated detection, for several reasons. One is the extremely small number (i.e. the low base rate) of future violent offenders within the population of (radicalised) social media users. Another is that, at such a low base rate, even very good predictors produce large numbers of false positives, all of which need to be evaluated by human analysts. At best, a good machine learning classifier could help reduce the amount of data and support analysts in focusing on potentially security-relevant content.
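A back-of-the-envelope calculation illustrates the base-rate problem; all numbers below are hypothetical assumptions chosen for illustration, not figures from the research:

```python
# Illustration of the base-rate problem with hypothetical numbers: even a
# near-perfect classifier flags mostly innocent users when the target
# behaviour is extremely rare.

population = 1_000_000        # monitored users (assumption)
base_rate = 10 / population   # 10 future violent offenders per million (assumption)
sensitivity = 0.99            # share of actual offenders flagged (assumption)
specificity = 0.99            # share of non-offenders correctly not flagged (assumption)

true_positives = population * base_rate * sensitivity                # ~10
false_positives = population * (1 - base_rate) * (1 - specificity)   # ~10,000

flagged = true_positives + false_positives
precision = true_positives / flagged

print(f"Flagged accounts: {flagged:,.0f}")       # ~10,010
print(f"Share truly at risk: {precision:.2%}")   # ~0.10%, i.e. ~1 correct flag in 1,000
```

Under even these optimistic assumptions, roughly one in a thousand flagged accounts would actually be relevant, and every flag would still require human evaluation.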

However, what does this mean for the future application of predictive tools in security practice? Each post a predictive model flags as security-relevant needs to be interpreted: does the user express a real intention to commit violence, or is it just an empty threat, a case of merely verbal radicalism? A realistic assessment of this question inevitably requires investigating the user in “real life”. Automated monitoring of social media platforms for future violent offenders would therefore produce hundreds of investigation cases every day. Given the limited resources of security authorities, this approach does not seem realistic. Authorities already struggle to allocate their limited resources when dealing with known “potential attackers” (German: “Gefährder”), e.g. returnees from Iraq or Syria. Instead of developing tools that scour the digital sphere for warning signals, it would be more beneficial to support police investigators in their daily (online) risk assessments and in the monitoring of known extremist individuals. Here, tools such as the PRAT may be promising, as they support case-specific monitoring and investigation.

Ethical and legal restrictions

There are also considerable legal and ethical limitations. Automated analysis of the user-generated content of everyone on social media for predictive purposes rests on a “murky legal basis” and is ethically questionable. In the absence of suspicion, German police are only allowed to “patrol” the Internet but may not perform systematic analyses of user-generated content. In the course of the research project INTEGER, we conducted group discussions with average citizens to investigate the acceptance of Internet monitoring. Unsurprisingly, participants showed high acceptance of preventive Internet monitoring that is limited to known (violent) extremists, but low acceptance of any kind of mass surveillance.

Robert Pelzer

Robert Pelzer is a sociologist and criminologist at the Centre for Technology and Society (Zentrum Technik und Gesellschaft) at TU Berlin, where he heads the research area “Security – Risk – Criminology”. His research focuses, among other things, on (de-)radicalisation, police work on the Internet, and questions of the societal and ethical assessment of security technologies.
