Project description

The increasing relevance of automated decision-making in Europe and beyond

Automated decision-making has progressively become a central feature of administrative decision-making. It is generally recognized that automation can generate important benefits in terms of efficiency, since algorithms can deliver faster decisions and, by avoiding subjective bias, foster objectivity. However, it can also generate significant risks in more than one respect, especially as far as individuals’ due process rights are concerned. Transparency, too, is problematic. For instance, how can individuals ‘participate’ in a decision that is automated? How is transparency guaranteed vis-à-vis an algorithm? Are the source codes accessible? How is the duty to give reasons shaped in automated decisions?

The relevance of the topic thus clearly emerges, as also shown by the fact that several cases reviewing the compatibility of algorithm-based decision-making with the right to a fair trial and with principles of transparency have already been brought before the courts in many countries. Moreover, many legislators are currently intervening in the field, both at the national and at the supranational level.

The reasons for a comparative study

What is still lacking, however, is a thorough comparative study of the developments that are emerging at the national level. Such an analysis would make it possible to understand whether the problems faced by modern legal systems are fundamentally similar, if not the same, and whether the solutions, too, are similar or instead reveal differences deriving from tradition or policy preferences.

Precisely in order to fill this gap, by comparing rules and decisions on the use of algorithms in administrative decision-making in selected countries, this project aims to answer a twofold question: (1) whether the problems and risks generated by the use of algorithms in administrative decisions are similar across different legal systems, and (2) which solutions those legal systems adopt to guarantee individuals’ procedural rights vis-à-vis automated administrative decision-making. This in turn requires posing and answering many other specific questions, including the following: what are the dangers created by automated decision-making processes in administrative action? Which of the procedural principles currently applied to non-automated administrative procedures are put at risk, or made more elusive, by automated procedures? When does technology most affect individuals’ rights, and in which procedures in particular? Do the general legal principles that control the machinery of government in most, if not all, national legal systems (such as impartiality, transparency, and the protection of legitimate expectations) also apply to automated administrative decision-making processes? Assuming an answer in the affirmative, how are these principles applied? Which technicalities, procedures, and rules affect the actual implementation of these principles, and with what results? Answering these questions appears to be particularly urgent.

The Common Core methodology

In more operational terms, the analysis of these aspects will be carried out by applying a path-breaking methodology, already applied to the administrative law field by the ERC-funded CoCEAL project, in which the PI – Angela Ferrari Zumbini – has led one of the two lines of research: the Common Core methodology.

The latter was originally associated with private law thanks to the cutting-edge work by Mauro Bussani – who also participates in this project – and Ugo Mattei, joined by many other scholars from European and non-European legal systems, including two members of this project, namely the leader of the University of Trieste unit, Marta Infantino, and Camilla Crea. It has, however, also been successfully applied to administrative law, in order to verify which principles are common across European legal systems and which are not, and how they are understood and implemented under concrete circumstances. As anticipated, this strand of research was funded by the European Research Council through an Advanced Grant, awarded to Giacinto della Cananea – who also participates in this project – and involved several administrative law scholars, including the PI and the leader of the Tor Vergata unit, Martina Conticelli. The website of the CoCEAL project is www.coceal.it.

Applying the Common Core methodology to automated administrative decisions

In light of the fact that automated administrative decisions are, as already pointed out, emerging, the aim of the present project is to extend the Common Core methodology to this field, in particular in order to examine how selected legal systems balance the need to promote efficiency and objectivity (which might be fostered by the use of algorithms) with the protection of individuals’ rights. Indeed, if rules regulating the administrative procedure are always in search of a balance between legalism and pragmatism, between efficiency and due process, in the case of automated administrative decision-making the need for this balance is even more acute.

In particular, in order to address these questions and to examine the relationships between the legal systems included in the analysis, the research group prepared a questionnaire with seven factual cases concerning automated administrative decisions, considering in each case if and how individuals’ rights are guaranteed and enforced – the focus being on typical procedural rights, such as the right to be heard during the procedure, the duty of the Public Administration to give reasons, and the right of access to the file – with the aim of understanding how those rights are shaped when an individual is confronted with an algorithm managed by the Public Administration.

Alongside these practical cases, which play a key role in a comparative inquiry based on factual analysis, the questionnaire also contains, in order to provide a more general framework, six questions about the state of the art of both legislation and doctrinal research on the reliance on algorithms by public bodies and on automated decision-making in general.

Legal Systems considered for comparison

The project will compare 23 legal systems. One or more national experts from each legal system will answer our questionnaire.

  • Albania: Eralda Met’hasani Cani, Tirana University
  • Austria: Matthias Zußner, University of Graz
  • Bulgaria: Silvia Tsoneva, New Bulgarian University; Darina Zinovieva, Plovdiv University
  • China: Xixin Wang, Peking University Law School
  • Croatia: Dario Đerđa & Dana Dobrić Jambrović, Rijeka University
  • Czech Republic: Filip Křepelka, Masaryk University
  • Estonia: Katrin Nyman Metcalf, Tallinn University of Technology
  • European Union: Barbara Marchetti, University of Trento
  • France: Maximilien Lanna, University of Lorraine
  • Germany: Christina Fraenkel-Haeberle, University of Speyer
  • Hungary: Péter Mezei & Erzsébet Csatlós, University of Szeged
  • Italy: Diana-Urania Galetta & Stefano D’Ancona, University of Milan
  • Latvia: Edvīns Danovskis, University of Latvia
  • Lithuania: Goda Strikaitė & Jurgita Paužaitė-Kulvinskienė, Vilnius University
  • Netherlands: Jacobine van den Brink & Louise Verboeket, University of Amsterdam
  • Poland: Piotr Tereskiewicz, Jagiellonian University; Monika Namysłowska, University of Łódź
  • Romania: Dacian C. Dragos, Babeș-Bolyai University; Călin Ioan Rus, Hasselt University
  • Serbia: Marko Milenkovic, EUI
  • Slovenia: Damjan Mozina, Ljubljana University
  • Spain: Agustí Cerrillo, University of Catalunya
  • Turkey: Pınar Çağlayan Aksoy, Bilkent University
  • UK: Gordon Anthony, Queen’s University Belfast
  • USA: Catherine Sharkey, New York University