Homo ex machina. Artificial Intelligence in a Brave New World—A Glimpse from Europe

There is no area of human activity that functions today without the involvement of AI technologies. AI, a driving force of the Fourth Industrial Revolution, penetrates modern human life with ever-increasing intensity. It is claimed to bring about decision-making efficiency and performance accuracy that have never existed before. At the same time, artificial intelligence creates far-reaching dangers to almost every aspect of the modern world. From the legal point of view, the most eminent ones include: the bias stemming from inherited historical data, which AI tends to repeat and multiply; the lack of transparency (the "black-box society"), which makes it impossible for a human being to understand how AI arrived at a decision; and the elimination of the human being from the decision-making process, which raises the issue of appropriateness and legitimacy. AI can only be developed when a sufficient amount of data is available, and hence the article elaborates on the fundamental role that data plays for the modern world and on how transnational corporations set global rules on the collection of and access to data in a process that lacks democratic accountability and calls the position of states into question. The article ends by presenting the EU's attempts to address the issues that arise from unrestricted access to data, invasion of privacy, and the threats posed by artificial intelligence. It also elaborates on the prominent position that artificial intelligence holds on the EU political agenda and presents the political and legislative actions undertaken by the European Union in this regard.


The New World of Technological Advancement
These days, artificial intelligence (AI), a collection of technologies that combine data, algorithms, and computing power, 1 is everywhere and does everything. There is no area of human activity that functions today without the involvement of AI technologies. AI assists in agriculture, healthcare, education, manufacturing, transportation, media, customer service, entertainment, law enforcement, climate change mitigation, 2 increasing sustainability, 3 and even in finding one's true love. 4 AI technologies, driven by big data, are fuelling a Fourth Industrial Revolution, 5 a revolution that is supposed to "fundamentally alter the way we live, work and relate," in scale, scope, and complexity "unlike anything humankind has experienced before." 6 The scale of the coming changes is indicated by forecasts predicting that computers will replace humans in one-third of traditional jobs by 2025. 7 It is easy to realise that artificial intelligence has already become a part of our world when one googles, talks to (or perhaps with) Alexa or Siri, drives a Tesla, or watches a robot gracefully cleaning one's apartment. AI, however, is also deeply involved in less conspicuous, yet equally important, large-scale decision-making processes in all areas of our life, which sometimes entirely escape the scrutiny of society. Only huge scandals, like Cambridge Analytica, 8 make societies more aware of the dangers posed by unrestricted access to personal data, which allows very elaborate and precise profiling and targeting of every individual member of society. Simply put, while AI offers better, more efficient, and more adequate solutions based on big data and deep learning, reached in a process not affected by the deficiencies of human nature, both the construction of the algorithms and the way the results might be used raise concerns. In other words, it seems too good to be true, and, indeed, it is.

The Big Trio
Lawyers, as well as philosophers, ethicists, and sociologists, express their concerns about the far-reaching dangers that AI presents to almost every aspect of the modern world. While these dangers are identifiable on several levels, from the legal point of view the following are perhaps the most threatening: the bias stemming from inherited historical data, the lack of transparency (the "black-box society"), and the elimination of the human being from the decision-making process. First, AI is able to make decisions based solely on supplied data in a way that places it far ahead of humans doing the same job. In some areas, such as medical diagnostics or legal due diligence, if the algorithm is supplied with correct data that simply refers to facts, one can only applaud the results, in terms of both speed and accuracy. AI systems offer time and cost savings of up to 90 percent in contract review (LawGeex). 9 It is claimed that COIN (JP Morgan's Contract Intelligence) needs only a few minutes to perform tasks equal to a hundred hours of human work. 10 Also, in the area of medical diagnostics, AI begins to exhibit accuracy that matches or exceeds that of human-made diagnoses. 11 In other areas, however, where the input data required for the algorithms is historically contaminated with bias (based on past human decision-making), the algorithm can replicate the bias, or even reinforce it. 12 The bias can, of course, originate elsewhere, perhaps in flawed logic or even the prejudices of the programmers. 13 As an example, consider an article's title: "Women Must Act Now, or Male-Designed Robots Will Take Over Our Lives." 14 The bias can likewise come from statistical flaws in the management of data, such as outdated data, selection bias, sampling bias, misleading statistics, or collection or modelling errors. 15
The best-known examples of historically biased data producing a discriminatory effect involve the use of AI technology in the judiciary and crime-prevention sectors. Claims of racial bias were voiced against COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used in the United States to guide sentencing by predicting the likelihood of a criminal re-offending. 16 According to ProPublica's analysis, while COMPAS predicted recidivism correctly at a rate of 61 percent and violent recidivism at 20 percent, it was "particularly likely to falsely flag black defendants as future criminals, wrongly labelling them this way at almost twice the rate as white defendants." 17 Similarly, PredPol 18 is an algorithm designed to predict where and when crimes will likely occur over the next 12 hours. The algorithm, which examines ten years of data, including the types of crimes as well as the dates, locations, and times of offences, was accused of being a predatory policing program that "inflicts harm on the city's (Los Angeles) poorest residents." 19 Likewise, AI can replicate patterns of discrimination against women. In the case of the AI technology used by Amazon, 20 the problem arose because the algorithms learned by observing patterns in resumes submitted to the company over a ten-year period. Most of the applications came from men, reflecting male domination in the tech industry. Subsequently, the AI taught itself to prefer male candidates, penalised resumes that included the word "women's" ("women's chess club captain"), and downgraded graduates of all-women colleges.
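The mechanism at work in the Amazon example can be illustrated with a minimal sketch (the data, tokens, and scoring rule below are hypothetical illustrations, not Amazon's actual system): a model that scores resumes by the historical hire rate of the words they contain will penalise any word that, in a male-dominated applicant pool, happened to co-occur with fewer hires, even when the word says nothing about the candidate's qualifications.

```python
from collections import Counter

# Toy historical hiring data: (resume tokens, hired?) pairs.
# The sample mirrors a male-dominated applicant pool, so a token
# correlated with female applicants co-occurs with fewer hires.
history = [
    (["python", "chess", "captain"], True),
    (["java", "chess"], True),
    (["python", "leadership"], True),
    (["womens", "chess", "captain"], False),
    (["womens", "volleyball"], False),
    (["java", "volleyball"], True),
]

hired = Counter()
total = Counter()
for tokens, was_hired in history:
    for t in set(tokens):
        total[t] += 1
        hired[t] += was_hired  # True counts as 1

def token_score(token):
    """Empirical hire rate of resumes containing the token (Laplace-smoothed)."""
    return (hired[token] + 1) / (total[token] + 2)

def resume_score(tokens):
    """Average per-token hire rate: the model's learned 'preference'."""
    return sum(token_score(t) for t in tokens) / len(tokens)

# Two resumes with identical qualifications; one contains one extra word.
a = resume_score(["python", "chess", "captain"])
b = resume_score(["womens", "python", "chess", "captain"])
assert b < a  # the word "womens" alone lowers the score
```

Nothing in the rule is deliberately discriminatory; the score faithfully reproduces the historical pattern, which is precisely the problem the article describes.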
Concerns are also growing when it comes to facial recognition systems, a potential source of both racial and gender bias. 21 While this is not a new problem (Hewlett-Packard faced it in 2009 22 ), recent tests by MIT and Stanford University of three general-purpose facial-analysis systems, from Microsoft, IBM, and Megvii, which could be used to match faces in different photos as well as to assess characteristics such as gender, age, and mood, revealed striking differences in the accuracy of the results. The systems operated almost perfectly for light-skinned males (0.8 percent error rate), while for dark-skinned women the error rate was 34.7 percent, and for women with the darkest shade of skin it was 46.8 percent, almost like guessing at random. 23 Considering that facial recognition systems are increasingly used in law enforcement, for example, for the identification of suspects in a crowd, biases in face recognition may lead to ingraining biases in police stop-and-search procedures. 24 Second, the algorithm-based decision-making process is entirely hidden from the end users of the decision. This is irrelevant (at least apparently irrelevant) in many situations, but deeply troubling in others, breeding distrust toward nontransparent outcomes: the so-called "black-box" paradox. Algorithms are developed as mathematical techniques, and human intervention is limited to designing the initial scheme. This means that the algorithms can be challenging to understand not only for the end users (or addressees) of a particular AI technology, but even for the developers of the algorithm. To give an example, an AI system called Deep Patient applied deep learning to a hospital's database of patient records. While the AI proved proficient at predicting diseases (it discovered patterns hidden in the hospital data), its developers had no idea how Deep Patient learned to do it. 25
Deep-learning machines have the ability to reprogram themselves, which creates a situation where the programmers lose the capacity to understand the logic behind the AI's decisions. 26 For the AI end users, the situation is even more challenging, as all there is to see is the input (if it is even possible to realise that the input is being collected) and the output. The working of the system remains a mystery, and there is no way to understand how one becomes the other. 27 The increased use of AI means, therefore, that ever-increasing spheres of human life are run by technologies that make it challenging to know and understand why and how a certain outcome was achieved: a black-box society. 28 The lack of such knowledge and understanding has various consequences. On the one hand, AI-achieved results might be difficult for societies to accept due to their lack of transparency and observable discriminatory traits. On the other hand, AI-made outcomes and decisions might be difficult to challenge due to their nature, for instance, the full automation of the decision-making process. This full automation of the decision-making process, which makes it entirely de-humanised, is the third area of concern. The question, really, is how to ensure that a fully automated decision-making process, run by computer programs, will respect human rights or, in an even wider context, human values, which are deeply embedded in human emotions and are therefore extremely challenging to translate into mathematical formulas.

Living in a World Run by Data . . .
Everything humans do nowadays is recorded. The tracing of our activities is not limited to online actions, as automated surveillance systems are able to track us even in what seems to be an offline world. The great question of today is not really whether the data is being collected, but to whom the data is made available, and for how long. As Frank Pasquale put it, 29 "tracked even more closely by firms and governments, we have no idea of just how much of this information can travel, how it is used, or its consequences." The need to ensure privacy in a world of full transparency, but only for the subjects of the analysis, not for the analysis process itself, becomes a task of essential importance. The unrestricted circulation of data leads to unprecedented results, infringing not only on fundamental human rights but even on a fundamental sense of decency. Perhaps the greatest danger is the cross-referencing of pools of data collected in various environments and for various purposes, for example, data stemming from medical examinations being used to limit access to insurance (medical or otherwise), personal life data from social media outlets impacting the evaluation of creditworthiness, and so forth.

. . . and Run by Corporations
AI is the perfect tool to increase efficiency in the hands of corporations that have access to large, or even limitless, amounts of (mostly user-generated) data. These transnational corporations have long ceased to act as solely economic players and have entered the sphere of international relations. Reflection on the growing position of transnational corporations in international relations is not new. 30 As Babic, Fichtner, and Heemskerk 31 put it: the transnationalisation, or de-nationalisation, of production and finance has created new and growing opportunities for firms to shift production, participate in complex global value chains that are difficult to regulate, and circumvent state attempts to regulate and tax corporate activities. This has led to big business developing a profound structural power position on the global scale. 32 It is clear that, at the moment, state power is limited and constrained vis-à-vis the mobility and agility of transnational capital, which is detached from the old world of nation states. 33 The publicised clashes between the tech giants over developing and utilising AI technology clearly prove this point. Recently, 34 Apple announced the introduction of upcoming iOS 14 privacy measures. They will require users to grant permission for their activities to be tracked for personalised advertising purposes. 35 On an app's product page in the App Store, users will be able to learn about "some of the data types the app may collect, and whether that data is linked to them or used to track them." In order to submit new apps and app updates to the App Store, developers will have to provide information about their app's privacy practices, including the practices of third-party partners whose code is integrated in the app. 36
This triggered a public reaction from Facebook, which on 16 and 17 December 2020 ran a full-page ad in the Wall Street Journal, the New York Times, and the Washington Post in response to Apple's policy. According to Facebook, Apple's new policy will harm small businesses and the Internet as a whole. Many apps and websites will have to start charging subscription fees or add more in-app purchase options, making the Internet much more expensive, because Apple's change will limit the ability to run personalised ads. 37 Facebook claims that Apple's move "is not about privacy, it's about profit," 38 and that this is part of Apple's strategy to expand its fees and services business. 39 Strikingly, Facebook accuses Apple of making "far-reaching changes without input from the industry and the businesses most impacted" in an effort to push businesses and developers into a business model that benefits Apple's bottom line. Facebook also announced that it would provide information for the antitrust suit filed against Apple by Epic Games, to let the court understand "the unfair policies that Apple imposes." Apple's response stresses that the company's policy is a matter of standing up for its users, who "should know when their data is being collected and shared across other apps and websites." According to Apple, Facebook does not have to stop tracking users or creating targeted advertising; it must simply give its users a choice. This debate says everything about the modern data-run world. Facebook, with over 2.7 billion monthly active users (as of the second quarter of 2020), 40 and Apple Inc., the world's first company to record a market capitalisation of $1 trillion, which passed the $1.3 trillion threshold in December 2019 and thereby surpassed the annual GDP of 82.1 percent of the 263 countries for which the World Bank compiles GDP, 41 argue over privacy issues for the entire world, while the word "legislator" is nowhere to be found in the background.
The regulator has been replaced by a private regulator, and even the noblest explanation of the decision-making process entirely lacks any sort of democratic accountability.

The European Union Takes a Stand (and Rightly So)
The European Union is no stranger to the dangers posed by the unrestricted collection of data, the invasion of privacy, and the threats of artificial intelligence. The European Union's position in this respect is a curious one. Considering that the market leaders in high tech are located outside the European Union, the EU finds itself in a weaker position regarding data access, 42 but at the same time EU-imposed rules impact the functioning of non-EU companies, creating protective standards for EU citizens and companies. The European Union has not, however, given up working toward a leading position in the data-run world. It claims that substantial shifts in the value and re-use of data across sectors are coming, and that the volume of data produced worldwide continues to increase and flood the market in so-called data waves. 43 The European Union still declares that it is feasible for it to become a market leader in the technology area, but this can only happen if the EU is ready to face those new data waves with new, superior tools in AI and machine learning. 44 Artificial intelligence is high on the European Union's political agenda. Commission President Ursula von der Leyen announced in her political Guidelines a coordinated European approach to the human and ethical implications of AI, as well as a reflection on the better use of big data for innovation. 45 This is clearly a political issue; the stance the European Union takes serves to strengthen and support the EU's negotiating position on the global level in the context of the digital economy. 46 The EU's optics were set already in the General Data Protection Regulation (GDPR), adopted in 2016. 47 There, the EU took notice of the impact of algorithm-based decision-making on the fundamental rights and freedoms of natural persons.
Recital 71 of the GDPR preamble takes a clear position in this regard, declaring that the data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as the automatic refusal of an online credit application or e-recruiting practices without any human intervention. It continues that such processing includes "profiling," consisting of any form of automated processing of personal data that evaluates the personal aspects relating to a natural person, in particular analysing or predicting aspects concerning the data subject's performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements, where it produces legal effects concerning the subject or similarly significantly affects the subject. GDPR Article 15 gives data subjects the right to obtain confirmation as to whether or not personal data concerning them are being processed, as well as access to the personal data and to specific information (including, among other things, the purpose of the processing and the categories of personal data concerned). For the European Union, the success of the GDPR is a source of pride and entitlement that positions the EU as the ethical standard-setter for the data-run modern world. The European Union understands very clearly that it must "act as one and define its own way, based on European values" to promote the development and deployment of AI, 48 as the intensive work undertaken in this area shows. In April 2018 the European Commission presented the European Strategy for AI, 49 promoting a pan-European approach. 50
The Coordinated Plan, 51 prepared in cooperation with the Member States, runs until 2027 and mandates joint actions with a view to ensuring efficient cooperation between the Member States and the Commission, based on extensive EU-level funding. 52 The European Union pursues AI progress on several levels simultaneously. The European Parliament has already adopted reports and resolutions dealing with specific aspects of AI use, 53 stressing the need to adopt human-centric technology that is ethical by design, limits the autonomy of AI and robotics, ensures the transparency and explainability of algorithms, and eliminates bias of any kind. The European Commission takes notice of AI in consumer-oriented actions, 54 but also carries out extensive AI-specific programs. In 2018, the Commission announced a Communication on Artificial Intelligence for Europe, 55 which declares that AI-enabled tools in business-to-consumer transactions must be fair, transparent, and compliant with consumer legislation, and that individuals should be able to control the data generated by using these tools and should know whether they are communicating with a machine or another human. The Commission focuses on three pillars: increasing public and private investment, preparing for the socioeconomic changes brought about by AI on the labour market and in the field of education, and ensuring an appropriate ethical and legal framework. In June 2018, the Commission established a High-Level Expert Group on Artificial Intelligence, gathering representatives from academia, civil society, and industry to support the implementation of the European strategy on artificial intelligence, with a focus on the ethical aspects of AI. 56 The guidelines prepared by the experts 57 reconfirmed the Commission's approach, 58 which sees the human as the centre of the development of AI (known as the "human-centric approach"). 59
In February 2020, the Commission published a White Paper 60 in which it established that the approach to AI ought to be regulatory and investment-oriented, considering both the aim of promoting the development of AI and the risks of this technology. 61 It recognises that attending to the human and ethical implications of AI is key to promoting its trustworthiness. 62 The Commission clearly reinforces the human-centric approach and the need to respect the fundamental rights of every citizen, including human dignity and the protection of private data, striving to establish an "AI ecosystem of excellence and trust." 63 It stresses that a European approach is necessary and urgent in order to avoid the fragmentation of the internal market that national initiatives to regulate AI technology would cause. 64 To close the (otherwise rather gloomy) year 2020 successfully, the EU Commission announced several important documents and initiatives. In November, the Commission published the Final Report of the Public Consultation on the AI White Paper. 65 On 15 December, at a time of day convenient for U.S. Silicon Valley to attend live, the Commission showed the world important parts of its Digital Strategy: 66 the Digital Services Act 67 and the Digital Markets Act, 68 which complement the European Democracy Action Plan. 69 These Acts concentrate on creating a safer digital space in which the fundamental rights of all users of digital services are protected and on establishing a level playing field to foster innovation, growth, and competitiveness, both in the European Union and globally; as such, they will have a direct impact on the giant tech companies, for example, by requiring more transparency in advertising targeting. Welcoming the regulation, Facebook has already complained that the EU regulators have not aimed sufficiently at Apple. 70 It is worth noting that, around the same time, the European Union Agency for Fundamental Rights published a report titled "Getting the Future Right. Artificial Intelligence and Fundamental Rights." 71

What Does the Future Hold?
The AI-run world is probably inevitable. The question that requires an urgent answer is how to construct the rules for artificial intelligence and where (and how) to set the limits for AI. The development of AI is taking place at an unprecedented speed and in a way that is sometimes impossible to comprehend. AI can teach itself tasks that quite recently were thought to require the unique intelligence (or deceptive abilities) of humans. 72 In a somewhat haunting premonition, Professor Stephen Hawking warned that "the development of full artificial intelligence could spell the end of the human race." 73 He, Elon Musk, and over 1,000 AI and robotics researchers signed a letter suggesting a ban on AI warfare, insisting that the trigger should always remain in the hands of a human controller. 74 This approach echoes the position of the European Union: AI with the human being at the centre, with built-in safeguards for human dignity and fundamental rights, even if that inevitably leads to limiting the operation of AI. Those limits and safeguards, however, constitute safe anchors for democracy and should not be abandoned for the sake of efficiency, even in extraordinary circumstances like pandemics. 75