Within just a few years, the Varieties of Democracy (V-Dem) project has experienced a remarkable rise to both academic and political prominence. As I show in a paper just published open access in Contemporary Politics, this rise has been accompanied by a notable discursive shift: having started as a project aimed at taking seriously the essential conceptual contestability of democracy, V-Dem has in recent years adopted an increasingly narrow and taken-for-granted focus on liberal democracy. This turn from the contestation to the decontestation of democracy, which responds to the perception of serious threats to democracy in general and to liberal norms in particular, is striking in and of itself. In the face of the current crisis of democracy, it is also deeply problematic, as it contributes to downplaying the inherent limitations of liberal democracy. The following contribution presents and summarizes the main arguments of the paper.
Keyword: Research Methods
Web Scraping Social Media: Pitfalls of Copyright and Data Protection Law
The increasing popularity of web scraping methods raises a plethora of legal questions. In our first article, we analyzed the growing popularity of web scraping methods and how the Terms of Service of the social media platforms bear on their use. In this article, we discuss further questions of copyright law and data protection law regarding web scraping, using the German legal situation in copyright law as an example.
Web Scraping Social Media: Legitimate Research or a Breach of Contract?
To make full use of the massive amounts of social media platform data for the purposes of scientific research, data is increasingly obtained using automated collection methods such as web scraping. Web scraping makes it possible to automatically access and retrieve information directly from social media web interfaces and other websites. The technical process involves two main steps: first, the website is accessed with the assistance of a web bot or a web crawler; second, the retrieved content is automatically parsed and the relevant information extracted.
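The two-step process described above can be sketched in Python. This is a minimal illustration, not a production scraper: all class and variable names are illustrative, and Python's standard-library `html.parser` stands in for the dedicated crawling and parsing frameworks typically used in practice. The first step (retrieval) is only simulated here with a static HTML snippet.

```python
from html.parser import HTMLParser

class PostExtractor(HTMLParser):
    """Collects the text of every <p class="post"> element it encounters."""

    def __init__(self):
        super().__init__()
        self._in_post = False
        self.posts = []

    def handle_starttag(self, tag, attrs):
        # Mark that we are inside a target element.
        if tag == "p" and ("class", "post") in attrs:
            self._in_post = True

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_post = False

    def handle_data(self, data):
        # Step 2: extract only the information we are interested in.
        if self._in_post:
            self.posts.append(data.strip())

# Step 1 (simulated): in a real scraper, the HTML would be retrieved
# over HTTP, e.g. with urllib.request or a crawler framework.
html = ('<html><body>'
        '<p class="post">First post</p>'
        '<p>advertisement</p>'
        '<p class="post">Second post</p>'
        '</body></html>')

# Step 2: parse the markup and extract the targeted information.
extractor = PostExtractor()
extractor.feed(html)
print(extractor.posts)  # → ['First post', 'Second post']
```

Whether such automated access is permissible in a given case is precisely the legal question the articles above address; the sketch only shows the technical mechanics.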