“Just” Algorithms: Justification (Beyond Explanation) of Automated Decisions Under the General Data Protection Regulation

This paper argues that if we want a sustainable environment of desirable AI systems, we should aim not only at transparent, explainable, fair, lawful, and accountable algorithms, but should also seek "just" algorithms, that is, automated decision-making systems that include all the above-mentioned qualities (transparency, explainability, fairness, lawfulness, and accountability). This is possible through a practical "justification" statement and process (possibly derived from an algorithmic impact assessment) through which the data controller proves, in practical ways, why the AI system is not unfair, not discriminatory, not obscure, not unlawful, and so forth. In other words, this justification (possibly derived from a data protection impact assessment on the AI system) proves the legality of the system with respect to all data protection principles (fairness, lawfulness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity, and accountability). All these principles are necessary components of a broader concept of just algorithmic decision-making and are already required by the GDPR, in particular considering: the data protection principles (Article 5), the need to enable (meaningful) contestation of automated decisions (Article 22), and the need to assess the necessity, proportionality, and legality of the AI system under the Data Protection Impact Assessment framework (Article 35).


Introduction
The regulation of automated decision-making (ADM) in the General Data Protection Regulation (GDPR) is a topic of vivid discussion. While the first commentators focused mostly on the existence of a right to explanation in the body of the Regulation, the subsequent discussion has focused more on how to reach a good level of explainability or, even better, a good level of algorithmic accountability and fairness. This paper argues that if we want a sustainable environment of desirable ADM systems, we should aim not only at transparent, explainable, fair, lawful, and accountable algorithms, but should also seek "just" algorithms, that is, automated systems that include all the above-mentioned qualities (transparency, explainability, fairness, lawfulness, and accountability). This is possible through a practical "justification" statement and process through which the data controller proves, in practical ways, the legality of an algorithm with respect to all data protection principles (fairness, lawfulness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity, and accountability). Indeed, as this article shows, all these principles are necessary components of a broader concept of just algorithmic decision-making. This justificatory approach might not only encompass many of the fragmentary approaches proposed thus far in the legal and computer science literature, but might also solve many existing problems in the artificial intelligence (AI) explanation debate, for example, the difficulty of "opening" black boxes, the transparency fallacy, and the legal difficulties in enforcing a right to receive individual explanations. Section 2 presents the general rules about profiling and ADM in the GDPR, while Section 2.1 discusses the debate about the interpretation of those rules.
Then, Section 3 addresses the definition and limits of the concept of "explanation," and Section 4 proposes some preliminary, tentative solutions to those limits, adopting a systemic accountability approach. Developing upon these first elements, Section 5 introduces the concept of Algorithm Justification, while Section 6 contextualises this concept in the legal field and Section 7 in the GDPR field, explaining on which bases a "justificatory" approach is not only useful but also necessary under the GDPR. Further developing this challenge, Section 8 addresses how an ADM justification should be conducted, considering in particular the data protection principles in Article 5. Finally, Section 9 proposes a practical "justification test" that could serve as a first basis for data controllers who want to justify ADM data processing under the GDPR rules.

The GDPR Rules about Automated Decision-Making
The GDPR has tried to provide a solution to the risks of automated decision-making through different tools: a right to receive/access meaningful information about the logic, significance, and envisaged effects of automated decision-making processes (Articles 13(2), lett. f; 14(2), lett. g; and 15(1), lett. h) and the right not to be subject to automated decision-making (Article 22), with several safeguards and restraints for the limited cases in which automated decision-making is permitted. Article 22(1) states as follows: "the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." This right shall apply almost always in the case of sensitive data 1 (Art. 22(4)). For other personal data, this prohibition does not apply in only three cases: a. the decision "is necessary for entering into, or performance of, a contract between the data subject and a data controller"; b. "is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests"; or c. "is based on the data subject's explicit consent" (Art. 22(2)).
In cases (a) and (c) from the above list, "the data controller shall implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision" (Art. 22(3)). In addition, recital 71 explains that such suitable safeguards "should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision." In sum, in the case of a "decision based solely on automated processing, including profiling, which produces legal effects concerning [data subjects] or similarly significantly affects [them]," individuals have two different safeguards: 1. The right to know of the existence of that processing and meaningful information about its logic, significance, and consequences. 2. The right not to be subject to that processing, except in specific cases (precontractual processing, explicit consent of data subjects) where other appropriate safeguards must be provided, such as (at least): a. the right to obtain human intervention from the controller; b. the right to express his or her point of view; c. the right to contest the decision (or "challenge" it, as recital 71 puts it); d. possibly, the right to "obtain an explanation of the decision reached after such assessment." However, this last right is not included in the body of Article 22, but only in the explanatory recital 71.

Debate and Interpretations
The interpretation of the GDPR rules about automated decision-making has generated a vivid debate in legal literature. Several authors have interpreted this net of provisions as a new right to algorithm explanation, 2 while other scholars have adopted a more sceptical approach, analysing the limits and constraints of the GDPR provisions 3 and concluding that the data subject's rights are more limited than expected and that there is no right to explanation. 4 In addition, other scholars have preferred a contextual interpretation of Articles 13(2)(f), 14(2)(g), 15(1)(h), and 22 of the GDPR, suggesting that the scope of those provisions is not so limited and that they actually can provide individuals with more transparency and accountability. 5 This last view was also partially confirmed by the Article 29 Working Party, which has released guidelines on profiling and automated decision-making. 6 However, scholars have proposed different ways to address the issue of suitable safeguards, as indicated in Article 22(3). To mention a few examples, some scholars proposed a model of counterfactual explanations, that is, a duty to clarify for individuals targeted by automated decisions, among others, "what would need to change in order to receive a desired result in the future, based on the current decision-making model." 7 Other scholars proposed a "legibility model" for automated decisions, where transparency and comprehensibility should exist 8 so that individuals are able to understand autonomously (readability) the importance and implications (comprehensibility) of algorithmic data processing. 9 Looking at the broader picture of the GDPR, some scholars proposed a more dynamic link between existing data protection rights (access, erasure, rectification, portability, etc.) in order to react to adverse effects of automated decisions, 10 or focused on the dualistic nature of the GDPR, based both on individual rights and on a multilevel design of algorithms (co-governance). 11 A corollary of this proposal is a system of multilayered explanations based on a data protection impact assessment of algorithms. 12
1 For sensitive data we refer to "special categories of personal data" according to Article 9(1). This exemption does not apply in case of point (a) or (g) of Article 9(2) (i.e., sensitive data given with the explicit consent of the data subject or processing necessary for reasons of substantial public interest) when "suitable measures to safeguard the data subject's rights and freedoms and legitimate interests are in place."
2 Bryce Goodman and Seth Flaxman, 2016.
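The counterfactual model mentioned above can be illustrated with a minimal sketch. The toy scoring rule, the thresholds, and the feature names below are all hypothetical (they are not drawn from any of the cited proposals); a real system would query the actual deployed model rather than a stand-in function.

```python
# Minimal sketch of a counterfactual explanation for an automated decision.
# All thresholds and feature names are hypothetical, for illustration only.

def approve(income: float, debt: float) -> bool:
    """Toy stand-in for an opaque credit-scoring model."""
    return income - 0.5 * debt >= 50_000


def counterfactual_income(income, debt, step=1_000, cap=200_000):
    """Smallest income increase that would flip a rejection into an approval."""
    if approve(income, debt):
        return None  # already approved: nothing to counterfactually explain
    extra = step
    while extra <= cap:  # bounded brute-force search over one feature
        if approve(income + extra, debt):
            return extra  # "you would need this much more income"
        extra += step
    return None  # no feasible change within the searched range
```

The returned delta is exactly the kind of information the counterfactual model envisages: what would need to change for the data subject to receive the desired result, without opening the model itself.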

Definition and Limits of "Explanation" of AI
Most of these proposed solutions are based on transparency, explainability, and interpretability of artificial intelligence. 13 In general terms, explaining automated decision-making is a complex task. Many commentators have first questioned the very notion of explanation. 14 In general terms, explaining means making an idea or a situation clear to someone by describing it in more detail or revealing relevant facts. 15 In other words, an explanation is an act of identifying the main reasons or factors that led to a particular consequence, situation, or decision.
In the field of computer science, explanation of AI has been understood as making it possible for a human being (designer, user, affected person, etc.) to understand a result or the whole system. 16 Tim Miller, analysing the structure and expectations of explanations, identified four characteristics of explanations: 17 (a) contrastive, i.e., mostly in response to some counterfactual; 18 (b) selected, i.e., not comprehensive, but based only on the few main factors that influenced the final decision; (c) causal, rather than correlational/statistical; and (d) social and contextual, i.e., depending on the specific social relations and contexts at stake. As affirmed in legal theory, an explanation attempts to render a situation or a process understandable from a causal or intentional perspective. 19 The causal nature of explanation is based on the link between cause and effect (What are the causes behind this decision?), while its intentional nature is based on the motives of the actor and her beliefs regarding reality (What are the purposes or intentions behind this decision?). Considering these two sides of the coin, an explanation is the "answer to the question of why something happened or why someone acted as he did." Said in other terms, an explanation is a framework for understanding the action that has happened. 20 The GDPR (and, in particular, the provisions in Article 22 and recital 71) is often interpreted as referring to only "one" kind of explanation. Actually, there is no unique explanation in practice; 21 each form of explanation highly depends on the context at issue. 22 More importantly, the capability to give a fair and satisfactory explanation also depends on the possibility of showing a causal link between the input data (and, in particular, some crucial factors within the input information) and the final decision. However, this is not always possible.
While for traditional data-based decision-making it might be easier to give adequate explanations, addressing the causes, the determining factors, and the counterfactuals, in more complex AI-based decisions it might be hard to reach this high level of explainability. Indeed, looking at the quick development of deep learning in different forms of automated decisions (even automated COVID-19 diagnosis based on, for example, lung images), explaining the specific reasons and factors of an individual decision might be nearly impossible. 23 An explanation that is neither causal nor contextual is perhaps inadequate to show the data subject possible grounds for challenging the decision, and is thus unsuitable under Article 22(3) of the GDPR. These last considerations may lead to an insurmountable dichotomy: either we prohibit more technologically advanced and inscrutable decision-making systems because they cannot comply with the GDPR explainability requirements, or we tolerate AI-based decision-making systems that do not formally respect the transparency duties in the GDPR.
In addition, explanations are not only sometimes problematic, but also not sufficient to make AI socially and legally "desirable." In particular, several scholars have reflected upon the "transparency fallacy" of algorithmic explanation, 24 that is, the risk that even a meaningful explanation might not be effectively received or understood by the data subjects (due to its technical nature or to the limited attention, interests, or even temporary cognitive limitations of the data subject). 25

The Shift from Explanation to the Bigger Accountability Picture
To overcome the above-mentioned limits of AI explanation, a possible solution might be to look at the broader picture of the GDPR. Article 22(3) and recital 71, when mentioning the possible measures to make automated decisions more accountable, address not only the right to an individual explanation, but several other complementary tools (e.g., the right to contest, the right to human involvement, and algorithmic auditing). In particular, there are several principles and concepts that might influence the interpretation of accountability duties also in the case of algorithmic decision-making: the fairness principle (Article 5(1)(a)), the lawfulness principle (Article 5(1)(a)), the accuracy principle (Article 5(1)(d)), the risk-based approach (Articles 24, 25, 35), and the data protection impact assessment model (Article 35). Looking also at these last provisions, a justification of the automated decisions taken is not only more feasible but also more useful and desirable than an explanation of the algorithm. 26 Justifying a decision means not merely explaining the logic and the reasoning behind it, but also explaining why it is a legally acceptable (correct, lawful, and fair) decision, that is, why the decision complies with the core of the GDPR and is, thus, based on proportional and necessary data processing, using pertinent categories of data and relevant profiling mechanisms. This justification process will be addressed in the next section. However, at this moment we can already affirm that justification and explanation are not necessarily in conflict with each other. When explanations are not satisfactory or feasible, the data controller should implement some complementary accountability tools. 27 In a previous paper, the author and a co-author proposed to disclose meaningful information about a Data Protection Impact Assessment (DPIA) on the algorithmic decision-making system; the DPIA, as mentioned in Article 35 of the GDPR, is a process to assess and mitigate the impact of data processing operations on the fundamental rights and freedoms of data subjects. 28 This paper, in addition to that proposal, introduces a practical description of a possible justification test on the algorithm, in which the data controller explains why the algorithm (analysed in its aggregated final effects on different data subjects, but also in its purposes, intentions, etc.) is not unfair, unlawful, inaccurate, beyond the purpose limitation, and so forth.

Justification Beyond Explanation of ADM
Before describing the practicalities of a possible justification model and before exploring the advantages of this approach, it is useful to understand what justification means in general as well as in the legal field, particularly with regard to data protection. In general terms, a justification is an action to prove or show something (a person, an action, an opinion, etc.) to be just, right, desirable, or reasonable. 29 Actually, the meaning of justification acquires different shades in different fields. For example, in theology, justification is the action of declaring or making "righteous" in the sight of God. 30 Similarly, in philosophical terms, the justification of decision-making that affects human agents and human societies means proving (on a utilitarian or deontological basis) whether a theory or an opinion reaches desirable goals according to the accepted values (utilitarian, deontological, etc.). 31 In scientific terms, justifying means proving that a theory or a statement is correct and verified through the scientific method. 32 While an explanation, as mentioned above, aims to make humans understand why a decision was taken, a justification aims to convince them that that decision is "just" or "right" (following the different benchmarks of rightness in different fields). 33 In different terms, while explanations are descriptive and intrinsic because they depend only on the system itself, justifications are normative and extrinsic because they are grounded on external references, namely a "norm" against which we can assess the validity of the decision. 34 This means that a justification requires two elements: (1) the reference norm and (2) the proof that that case or decision conforms to that norm. The proof can follow logical reasoning standards, while the "norm" depends on the specific context at issue.
As shown above, the norm can be based on theological, philosophical (utilitarian, deontological, etc.), scientific (scientific method) and, of course, legal grounds. Indeed, in legal terms, justification means proving that a certain action or act respects the current law and, more generally, the legality principle. 35 Actually, as Loi and colleagues argue, 36 the two-dimensional justification mentioned above (norm and proof) should be of a hybrid nature. In particular, the norms can also come from different sources (e.g., utilitarian and legal): a decision-maker can justify a decision on her "primary goals" based on utilitarian norms (i.e., business objectives), but she is also asked to justify her decision on "constraining goals" imposed by law and, thus, based on legal norms (or other ethical values), such as privacy, fairness, and so forth. 37 Justifying a decision on the primary goals aims to show that the decision is not morally arbitrary, while justifying it on the constraining goals aims to prove the legality of that decision.
28 Kaminski and Malgieri (n 12).
29 Lexico, "Justification," https://www.lexico.com/definition/justification (accessed on 25 November 2020).

The Legal Approach to Justification
Returning to the notion of legal justification, scholars have proposed different approaches to it, 38 in particular observing judgments and the reasoning behind judicial acts, which serve a function for appeal. 39 In general terms, there are legal positivist approaches (the valid law is in itself a sufficient justification) and more balanced ones (a justification rests on a balance between the letter of the law and other grounds having significance in the decision-making). 40 A more balanced approach might better solve different issues related to the law's open nature and the defeasible nature of legal justification (if additional information is taken into account, the status of a conclusion can change). 41 These considerations are also evident in criminal law, where a "justification" is an exception to the prohibition of committing certain offenses that renders a nominal violation of the criminal law lawful and therefore exempt from criminal sanctions. In doing so, such a justification balances a general legal norm with other contextual interests at issue. 42 In sum, while an explanation tends to clarify only why a decision was taken (on which "primary goals," and on which practical interests and needs it was based), 43 a "legalistic" justification usually tends to focus on the mere written law, without a contextual consideration of the balance of interests. Both these approaches appear incomplete for our purposes (the justification of algorithmic decisions). A desirable justification should show compliance not merely with the "law," but with the core or essence of the legal principles, that is, with the legality principle. 44 As we will argue below, the core of data protection in the GDPR is summarised in the data protection principles in Article 5. Accordingly, justifying automated decision-making under the data protection goals and norms means, at the least, showing respect for the principles of data protection in Article 5.

Justification in the GDPR: On Which Basis It Might Be Requested (or Encouraged)
In the GDPR we observe several references to the justification of data processing in general, and of automated decision-making in particular. In different parts of the GDPR, wherever there is a prohibition (e.g., the prohibition to repurpose the data processing, as stated in Article 5(1)(b); the prohibition to process sensitive data, as stated in Article 9(1); the prohibition to conduct automated decision-making, as stated in Article 22(1); the prohibition on transferring data outside the European Union, as mentioned in Article 44; etc.), there is always a list of exceptions, often accompanied by safeguards to protect the fundamental rights and freedoms of the data subject. This combination of exceptions and safeguards is the basis of what we can consider a justification. In addition, in these cases the GDPR often refers to the "principles of data processing" as the overarching norm or goal with which the data controller needs to comply in order to justify the legality of some nominally illegal acts (see, e.g., recital 72 about profiling or recital 108 about data transfers). We might observe another strong example of justification in the GDPR: the case of high-risk data processing (Article 35). Under the Data Protection Impact Assessment (DPIA) model, data controllers must prove the legal proportionality and necessity of the data processing, and thus the legal necessity and proportionality of any automated decisions taken (Article 35(7)(d)). This may constitute a form of justification of data processing on the basis of legality and legitimacy, aiming at the "essence" of data protection. 45 In addition, the Article 29 Working Party Guidelines on profiling recommend that data controllers (in order to comply with Articles 13-15) explain the pertinence of the categories of data used and the relevance of the profiling mechanism. 46 Assessing whether the data used are pertinent and the profile is relevant for a decision, as well as assessing the necessity and proportionality of the data processing in an automated decision-making system, seems to constitute a call for justification. The purpose of such an assessment is not only transparency about the technology and its processes, but a justification of the lawfulness, fairness, necessity, accuracy, and legitimacy of certain automated decisions. 47 Interestingly, empirical research has revealed that justification of algorithms (defined as showing the fairness of the goals and rationales behind each step in the decision) is the most effective type of explanation in changing users' attitudes toward the system. 48

The Grounds for Algorithmic Justifications in the GDPR: The Principles in Article 5
While some scholars have already addressed the need for a justification of automated decision-making (rather than a mere need for explanation), very few authors have tried to clarify what this ADM justification should be and how it should be conducted under the GDPR rules. This article argues that, considering the meaning of "legal justification" discussed in the previous sections, justifying an algorithmic decision should prove the legality of that decision. By "legality," we mean not just lawfulness, but also accountability, fairness, transparency, accuracy, integrity, and necessity. In recent years, scholars have called for fair algorithms, 49 or accountable algorithms, 50 or transparent algorithmic decisions, 51 or, again, for lawful and accurate automated decisions with integrity. Justifying ADM means calling for algorithmic decision processes that prove to have all the aforementioned characteristics and respect the essence or the core of data protection. 52 The author argues that the essence of data protection in the GDPR consists of the data protection principles cited in Article 5. Accordingly, justifying automated decisions means proving that they comply (or adjusting them so that they comply) with the data protection principles in Article 5. Interestingly, the principles of data protection seem to lead to the desirable characteristics of automated decision-making mentioned above. We will now analyse them one by one, contextualising them to the case of algorithmic decision-making. Article 5(1)(a) refers to lawfulness, transparency, and fairness. As regards lawfulness, automated decision-making should be lawful, that is, it should have a legal ground and respect fundamental rights and freedoms. Such a legal basis can be found not only in Article 6(1) (or in Article 9(2) in the case of special categories of personal data), but also in Article 22.
Since Article 22(1) is interpreted as a prohibition of automated decision-making, 53 in order to make ADM lawful it is necessary to prove that one of the exceptions in Article 22(2) (consent, contract, or European Union or national law) applies, with the related requirements in Article 22(3) (suitable measures to safeguard the data subject's rights, including at least the right to human intervention, to express his or her point of view, and to contest the decision). This part of the "justification" is the most formal one: the controller needs to justify why an activity that is apparently unlawful (profiling individuals or making significant decisions on automated bases) is instead lawful. In this sense, this part of the justification resembles legal justification in criminal law, as mentioned above. 54 As regards the fairness justification, the data controller should prove that the decision-making processing is fair, that is, that it is nondiscriminatory, unbiased, and nonmanipulative, and that, in general, it does not exploit a significant imbalance between the controller and the subject in particular contexts (vulnerable individuals). 55 In general, the algorithmic processing should not violate the expectations of the subjects, 56 and its effects should not impair human dignity, autonomy, safety, or other fundamental rights set out in the EU Charter of Fundamental Rights. 57 As regards the transparency justification, the data controller should prove that the algorithmic processing is legible, 58 in the sense that, at least, meaningful information about the logic, significance, and envisaged consequences of the decision-making is communicated to the subject at the beginning of the data processing (Articles 13(2)(f) and 14(2)(g)) and, upon request, after the processing has started (Article 15(1)(h)).
As argued in another article, 59 there are at least three levels of possible transparency: general (or "global") information, group-based explanation, and individual (or "local") explanation (implementing recital 71). Each level of transparency should depend on the level of risk of that algorithmic decision-making process. 60 This multilayered approach has already been discussed and endorsed in the field of computer science. 61 Article 5(1)(b) refers, then, to purpose limitation. According to this principle, the justification should also prove that the ADM system is based only on data collected for the specific (licit and declared) purpose of obtaining an automated decision affecting the data subject. From a broader perspective, the purpose limitation justification should also clarify that the algorithm was not originally developed for other purposes (military, commercial, etc.) and then repurposed for the processing at stake. 62 This would help to prevent algorithmic biases based on a decontextualisation of algorithms. 63 Article 5(1)(c) mentions the principle of data minimisation. Under this principle, the justification of the data controller should prove that the ADM is based on the processing of only those data that are adequate, relevant, and limited to what is necessary for the purpose of taking that automated decision. To give an example, if the controller is an employer who needs to hire a new employee and she declares that the automated decision-making processing has the purpose of selecting the worthiest candidate, any information about, for example, sexual orientation, ethnic origin, religion, or the possibility of taking maternity leave (fertility, marital status, etc.) is unnecessary and should not be collected. This might also be a way to prevent intentional discrimination 64 hidden through "masking," 65 that is, when the data controller tries to cover intentional discrimination behind the shield of data analytics.
In those cases, the data minimisation justification could be helpful. At the same time, it is helpful when the processed data are not explicitly about protected categories of information but could nevertheless reveal information that might potentially lead to discrimination. 66 Article 5(1)(d) refers to data accuracy. When justifying ADM, accuracy is also fundamental. The data controller should prove that the algorithmic decision is correct and accurate. Recital 71 (addressing ADM) requires data controllers to make sure "that factors which result in inaccuracies in personal data are corrected and the risk of errors is minimised." 67 Indeed, accuracy (of the input data and of the final decision produced) has generally been considered one of the main elements justifying the use of certain algorithms. 68 The WP29 has referred to inaccuracy as one of the main issues of automated decision-making, since errors in the data or in the ADM process itself might result in "incorrect classifications; and assessments based on imprecise projections that impact negatively on individuals." 69 To give a practical example, the European Banking Authority, in its report on advanced analytics, has given great importance to data accuracy for justifying algorithms in the banking sector and has developed that concept through different subconcepts: accuracy and integrity, timeliness, consistency, and completeness of data. 70 The accuracy justification should result not only in proving the accuracy of the input data, but also in proving that the chosen algorithm is fit for purpose, that is, that it produces accurate results. Indeed, discriminatory decisions are often also inaccurate and incorrect. 71 Empirical studies also confirm that the "usefulness" of an algorithmic decision is a key component of its social acceptance. 72 Article 5(1)(e) mentions the principle of storage limitation. Although in the field of ADM this principle may seem less pertinent, its function is important.
This principle requires that data be stored for no longer than necessary for the purpose of the processing: this time limitation should also apply to algorithmic decision-making. In other words, ADM should not be based on data that are no longer necessary (e.g., outdated data) for the purpose and the context of the decision. At the same time, controllers should not use algorithms that are no longer necessary for the declared purposes. Article 5(1)(f) mentions the principle of integrity and confidentiality. In the context of ADM, it is central that algorithmic decisions preserve data integrity and do not lead to cybersecurity risks that could adversely affect the safety (or any other fundamental right or freedom) of the data subject. Recital 71 also indirectly refers to these "risks" when mentioning automated decisions. In any event, cybersecurity, safety, and integrity are central elements to consider when justifying algorithms. A "just" algorithm is based on, and produces, data whose integrity is preserved, and it does not endanger the (digital or physical) safety of the data subject. 73 The EU Parliament approved a resolution according to which any high-risk artificial intelligence and related technologies, including "algorithms and data used or produced by such technologies, shall be developed, deployed and used in a manner that ensures that they are: developed, deployed and used in a resilient manner so that they ensure an adequate level of security by adhering to minimum cybersecurity baselines proportionate to identified risk, and one that prevents any technical vulnerabilities from being exploited for malicious or unlawful purposes." 74 The last principle in Article 5 is accountability (Article 5(2)). Accountability of ADM is an overarching goal that is considered the final objective of legally desirable AI, in particular in the data protection framework. 75 This is a "meta-principle," that is, a methodology to apply and implement all the other data protection principles in Article 5.
We can identify two perspectives of accountability justification in the GDPR: a practical perspective and a methodological one. The practical accountability justification should demonstrate that the data controller has proactively implemented suitable ADM measures under Article 22(3) and recital 71, 76 that she is ready to enable data subjects to exercise their ADM-related rights (within and beyond Article 22), and that those rights are effective: the right to contest the algorithm, for example, should be made effective through clear information about the system 77 and the decision, and there should be concrete technical or organisational steps to take into account any contestation by data subjects, to comply with it, or to explain why such a request is unreasonable. 78 The methodological perspective of accountability, on the other hand, indicates how the justification should be conducted, that is, how the justificatory auditing should be carried out (see below) and what the legal approach to justification should be. In particular, the accountability principle, as Article 5(2) indicates, puts the burden of proving data processing compliance on the data controller. 79 This means that there is a rebuttable presumption (praesumptio iuris tantum) that the data processing activity at stake, and thus any ADM processing as well, is not compliant with the data protection principles. The burden of proof about legality is on the data controller. 80 In other words, we should consider algorithmic decisions illegal by default, unless the data controller justifies them through a valid justification process, meant both as a process of justificatory auditing and as an eventual final justification statement.
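This "illegal by default" reading of Article 5(2) can be illustrated as a simple compliance gate: affirmative evidence is required for every principle, and missing evidence counts against the controller. The following is only an illustrative sketch made for this discussion; the principle identifiers and the boolean model of "proof" are assumptions for the example, not anything prescribed by the GDPR.

```python
# Illustrative sketch of the rebuttable presumption in Article 5(2):
# an ADM process is treated as non-compliant unless the controller
# affirmatively justifies every data protection principle.
PRINCIPLES = (
    "lawfulness", "fairness", "transparency", "purpose_limitation",
    "data_minimisation", "accuracy", "storage_limitation",
    "integrity_confidentiality", "accountability",
)

def adm_presumed_lawful(justifications: dict) -> bool:
    """Return True only if every principle has affirmative evidence.

    Absent or negative evidence defaults to False: the burden of
    proof lies with the data controller, not the data subject.
    """
    return all(justifications.get(p, False) for p in PRINCIPLES)
```

With an empty evidence record, `adm_presumed_lawful({})` is `False`: the presumption of non-compliance stands until the controller rebuts it for every principle.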

The ADM Justification Test
After having explained what the ADM justification should be, in terms of content and approach, this section presents a possible practical example of a justification test (and the related justification statement for data subjects and Data Protection Authorities). In a previous paper, the author and a co-author proposed a "legibility test" for automated decisions under the GDPR. 81 Other scholars have proposed an Algorithmic Impact Statement 82 for accountable decision-making on personal data. This paper, with the aim of including those different experiences in a broader perspective, proposes an algorithmic justification statement, based on a justification test. Such a justification test might act as an initial framework for conducting Algorithmic Auditing from a legal (and not merely technical) perspective. This test might also be the basis for conducting a DPIA on automated decision-making, in particular as regards the vague reference to the "assessment of the necessity and proportionality of the processing operations in relation to the purposes" (Article 35(7)(b)). The WP29 Guidelines on DPIA have proposed some criteria for an acceptable DPIA. These guidelines explain that the "assessment of the necessity and proportionality of the processing operations" also implies a good and comprehensive implementation of, inter alia, the data protection principles in Article 5 (namely the lawfulness, purpose limitation, data minimisation, and storage limitation principles). 83 Following the structure of Article 5 and the framework discussed in the previous section, a possible ADM Justification test might be as follows:

1. ADM lawfulness justification:
a. Does the ADM data processing have a lawful basis under Article 6(1)?
b. Is the ADM based on, or does it produce, special categories of data?
i. If yes, does the ADM data processing have a lawful basis under Article 9(2)?
c. Is the ADM based on one of the exceptions in Article 22(2)?
d. Is the ADM equipped with the suitable safeguards required in Article 22(3) and with any safeguards required by Member State legislation?
2. ADM fairness justification:
a. Is the ADM nondiscriminatory? How can the controller ensure the nondiscriminatory nature of the ADM result? Does the controller employ anti-discriminatory auditing on a regular basis (e.g., on a statistical basis)?
b. Is the ADM nonmanipulative? How can the controller ensure the nonmanipulative nature of the ADM result? Does the controller employ anti-manipulative auditing on a regular basis (e.g., on a statistical basis)?
c. Does the ADM exploit individual vulnerabilities through excessive power imbalance? How can the controller ensure that this does not happen?
3. ADM transparency justification:
a. Has the controller provided meaningful information about the logic, significance, and envisaged consequences of the ADM?
b. Is the controller ready to provide a clear general, group-based, or individual explanation to enable the data subject to challenge the decision?

4. ADM purpose limitation justification:
a. Is the purpose of the decision-making processing licit, clearly determined, and declared to the data subject?
b. Is the ADM processing based on data collected solely for that declared purpose?
c. How does the data controller ensure that re-purposing of data is avoided in that ADM system?
d. Was the ADM developed for other, different purposes?
e. Was the ADM trained on data originally collected for other purposes?
f. If the answer to point d and/or e is yes, how does the controller ensure that the ADM processing has been adjusted in order to avoid biases?
5. ADM data minimisation justification:
a. Is the ADM based solely on data that are adequate and necessary for the declared purpose of the automated decision?
b. Does the ADM system produce decisions that are strictly necessary for the declared purpose and the context?
6. ADM accuracy justification:
a. Is the ADM based on accurate data?
b. Does the ADM produce accurate results?
c. How can the data controller ensure that points a and b are respected (e.g., through auditing on a regular basis)?
7. ADM storage limitation justification:
a. Is the ADM based solely on data that are still necessary (e.g., not outdated) for the purpose and context of the decision?
b. Is the ADM processing based on algorithms that are still necessary for the declared purposes?
8. ADM integrity and confidentiality justification:
a. Is the ADM based on data whose integrity is ensured?
b. Is the ADM processing resilient enough to protect the digital and physical safety of data subjects?
c. Are cybersecurity risks adequately assessed and mitigated?
9. ADM accountability justification:
a. Are all data protection safeguards and rights (related to ADM) adequately implemented?
b. Are these rights made effective? For example, is the right to challenge the decision enabled by a clear explanation to the data subject? Are there organisational steps to "put a human in the loop"? Are there organisational steps to comply with a challenge request?
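A nine-part test of this kind could be encoded as a machine-readable checklist from which a draft justification statement is generated, with one status line per principle. The data structures and wording below are a hypothetical illustration sketched for this discussion, not a format proposed by the paper or by any authority.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    question: str
    answered: bool = False   # has the controller documented an answer?

@dataclass
class Section:
    principle: str           # e.g., "1. ADM lawfulness justification"
    items: List[Item] = field(default_factory=list)

    def complete(self) -> bool:
        # A principle is justified only when every question is documented.
        return bool(self.items) and all(i.answered for i in self.items)

def justification_statement(sections: List[Section]) -> str:
    """Render a per-principle status line for the statement addressed
    to data subjects and Data Protection Authorities."""
    return "\n".join(
        f"{s.principle}: {'justified' if s.complete() else 'NOT justified'}"
        for s in sections
    )

# Hypothetical excerpt of the test (items abbreviated).
test_excerpt = [
    Section("1. ADM lawfulness justification", [
        Item("Lawful basis under Article 6(1)?", answered=True),
        Item("Safeguards under Article 22(3)?", answered=True),
    ]),
    Section("6. ADM accuracy justification", [
        Item("Is the ADM based on accurate data?", answered=True),
        Item("Does the ADM produce accurate results?"),  # still open
    ]),
]
```

Here `justification_statement(test_excerpt)` would flag accuracy as "NOT justified" until the open item is documented, mirroring the burden-of-proof logic discussed above.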

Conclusion
In recent years, legal scholars and computer scientists have widely discussed how to reach a good level of AI explainability, algorithmic accountability, and fairness. This paper argues that, in the field of data protection, the GDPR already proposes a sustainable environment of desirable ADM systems, one that is broader than any ambition to have "transparent," "explainable," "fair," "lawful," or "accountable" ADM: we should aspire to just algorithms, that is, justifiable automated systems that include all the above-mentioned qualities (fairness, lawfulness, transparency, accountability, etc.). This might be possible through a practical "justification" process and statement through which the data controller proves, in practical ways, the legality of an algorithm with respect to all data protection principles (fairness, lawfulness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity, and accountability). This justificatory approach might also be a solution to many existing problems in the AI explanation debate, for example, the difficulty of "opening" black boxes, the transparency fallacy, and the legal difficulties in enforcing a right to receive individual explanations.
After an overview of the GDPR rules (Section 2) and of the definition and limits of AI explanations (Section 3), this article has proposed a wider, systemic approach to algorithmic accountability (Section 4). To do so, the concept of justification was introduced and analysed both in general (Section 5) and in the legal and data protection fields (Sections 6 and 7). This article argues that the justificatory approach is already required by the GDPR rules. Accordingly, Sections 8 and 9 explain why and how the data protection principles could be a meaningful basis to justify ADM systems.