Artificial Intelligence and Legal Identity

This article focuses on the issue of granting the status of a legal subject to artificial intelligence (AI), especially on the basis of civil law. Legal identity is defined here as a concept integral to that of legal capacity; however, this does not imply accepting that moral subjectivity is the same as moral personality. Legal identity is a complex attribute that can be recognized for certain subjects or assigned to others.

I believe this attribute is graded, discrete, discontinuous, multifaceted, and changeable. This means that it can contain more or fewer elements of different types (e.g., duties, rights, competencies), which in most cases can be added or removed by the legislator; human rights, which, according to the prevailing view, cannot be taken away, are the exception.

Humanity is now facing a period of social transformation related to the replacement of one technological mode with another: “smart” machines and software learn quite quickly, and artificial intelligence systems are increasingly capable of replacing people in many activities. One of the issues arising more and more frequently as artificial intelligence technologies improve is the recognition of artificial intelligent systems as legal subjects, since they have reached the level of making fully autonomous decisions and potentially manifesting “subjective will”. This issue was raised hypothetically in the 20th century. In the 21st century, the scientific debate has been steadily evolving, swinging to one extreme or the other with each introduction of new artificial intelligence models into practice, such as the appearance of self-driving cars on the streets or the presentation of robots with a new set of functions.

The legal issue of determining the status of artificial intelligence is of a general theoretical nature, owing to the objective impossibility of predicting all possible outcomes of developing new models of artificial intelligence. However, artificial intelligence (AI) systems are already actual participants in certain social relations, which requires the establishment of “benchmarks”, i.e., the resolution of fundamental issues in this area for the purpose of legislative consolidation and, thus, the reduction of uncertainty in predicting the development of relations involving such systems in the future.

The issue of the alleged identity of artificial intelligence as an object of research, mentioned in the title of this article, certainly does not cover all artificial intelligence systems, including the many “electronic assistants” that do not claim the status of legal subjects. Their set of functions is limited, and they represent narrow (weak) artificial intelligence. We will rather refer to “smart machines” (cyber-physical intelligent systems) and generative models of virtual intelligent systems, which are increasingly approaching general (strong) artificial intelligence comparable to human intelligence and, in the future, perhaps even exceeding it.

By 2023, the issue of creating strong artificial intelligence had been urgently raised by multimodal neural networks such as ChatGPT, DALL-E, and others, whose intellectual capabilities are being improved by increasing the number of parameters (perception modalities, including those inaccessible to humans) and by training on amounts of data that humans cannot physically process. For example, multimodal generative neural network models can produce images and literary and scientific texts that are not always distinguishable from those created by a human.

IT experts highlight two qualitative leaps: a speed leap (the frequency of the emergence of brand-new models is now measured in months rather than years) and a volatility leap (the inability to accurately predict what might happen in the field of artificial intelligence even by the end of the year). The GPT-3 model (the third generation of OpenAI’s natural language processing algorithm) was introduced in 2020 and could process text, while the next-generation model, GPT-4, launched in March 2023, can “work” not only with texts but also with images, and the following generation is learning and will be capable of even more.

A few years ago, the anticipated moment of technological singularity, when the development of machines becomes virtually uncontrollable and irreversible, dramatically changing human civilization, was expected to be at least several decades away, but today more and more researchers believe that it may come much sooner. This implies the emergence of so-called strong artificial intelligence, which would demonstrate abilities comparable to human intelligence and solve a similar or even wider range of tasks. Unlike weak artificial intelligence, strong AI would have consciousness; one of the essential conditions for the emergence of consciousness in intelligent systems is the ability to perform multimodal behavior: integrating data from different sensory modalities (text, image, video, sound, etc.), “connecting” information of different modalities to reality, and creating the complete, holistic “world metaphors” inherent in humans.

In March 2023, more than a thousand researchers, IT experts, and entrepreneurs in the field of artificial intelligence signed an open letter published on the website of the Future of Life Institute, an American research center specializing in the investigation of existential risks to humanity. The letter calls for suspending the training of new generative multimodal neural network models, since the lack of unified security protocols and the legal vacuum significantly amplify the risks, while the speed of AI development has increased dramatically due to the “ChatGPT revolution”. The letter also noted that artificial intelligence models have developed unexplained capabilities not intended by their developers, and that the share of such capabilities is likely to grow gradually. In addition, such a technological revolution dramatically accelerates the spread of intelligent gadgets, and new generations, children who have grown up in constant communication with artificial intelligence assistants, will be very different from previous ones.

Is it possible to hinder the development of artificial intelligence so that humanity can adapt to new conditions? In theory, it is, if all states facilitate this through national legislation. Will they do so? Based on the published national strategies, they won’t; on the contrary, each state aims to win the competition (to maintain leadership or to narrow the gap).

The capabilities of artificial intelligence attract entrepreneurs, so businesses invest heavily in new developments, with the success of each new model driving the process. Annual investments, both private and state, are growing; the global market for AI solutions is estimated at hundreds of billions of dollars. According to forecasts, in particular those contained in the European Parliament’s resolution “On Artificial Intelligence in a Digital Age” of May 3, 2022, the contribution of artificial intelligence to the global economy will exceed 11 trillion euros by 2030.

Practice-oriented business leads to the implementation of artificial intelligence technologies in all sectors of the economy. Artificial intelligence is used in both the extractive and processing industries (metallurgy, fuel and chemical industry, engineering, metalworking, etc.). It is applied to predict the efficiency of developed products, automate assembly lines, reduce rejects, improve logistics, and prevent downtime.

The use of artificial intelligence in transportation involves both autonomous vehicles and route optimization by predicting traffic flows, as well as ensuring safety through the prevention of dangerous situations. The admission of self-driving cars to public roads is an issue of intense debate in parliaments around the world.

In banking, artificial intelligence systems have almost completely replaced humans in assessing borrowers’ creditworthiness; they are increasingly being used to develop new banking products and enhance the security of banking transactions.

Artificial intelligence technologies are taking over not only business but also the social sphere: healthcare, education, and employment. The application of artificial intelligence in medicine enables better diagnostics, the development of new medicines, and robot-assisted surgery; in education, it allows for personalized lessons and the automated assessment of students’ knowledge and teachers’ expertise.

Today, employment is increasingly changing due to the exponential growth of platform employment. According to the International Labour Organization, the share of people working through digital employment platforms augmented by artificial intelligence is steadily increasing worldwide. Platform employment is not the only component of the labor transformation; the growing level of production robotization also has a significant impact. According to the International Federation of Robotics, the number of industrial robots continues to increase worldwide, with the fastest pace of robotization observed in Asia, primarily in China and Japan.

Indeed, the capabilities of artificial intelligence to analyze data used for production management, diagnostic analytics, and forecasting are of great interest to governments. Artificial intelligence is being implemented in public administration: efforts to create digital platforms for public services and to automate many processes related to governmental decision-making are now intensifying.

The concepts of “artificial personality” and “artificial sociality” are mentioned with increasing frequency in public discourse; this demonstrates that the development and implementation of intelligent systems have shifted from a purely technical field to research into various means of their integration into humanitarian and socio-cultural activities.

In view of the above, it can be stated that artificial intelligence is becoming more and more deeply embedded in people’s lives. The presence of artificial intelligence systems in our lives will become more evident in the coming years; it will increase both in the work environment and in public space, in services and at home. Artificial intelligence will increasingly provide more efficient results through intelligent automation of various processes, thus creating new opportunities and posing new threats to individuals, communities, and states.

As their intellectual level grows, AI systems will inevitably become an integral part of society; people will have to coexist with them. Such a symbiosis will involve cooperation between humans and “smart” machines, which, according to Nobel Prize-winning economist J. Stiglitz, will lead to the transformation of civilization (Stiglitz, 2017). Even today, according to some lawyers, “in order to enhance human welfare, the law should not distinguish between the activities of humans and those of artificial intelligence when humans and artificial intelligence perform the same tasks” (Abbott, 2020). It should also be considered that the development of humanoid robots, which are acquiring a physiology more and more similar to that of humans, will lead, among other things, to their performing gender roles as partners in society (Karnouskos, 2022).

States must adapt their legislation to changing social relations: the number of laws aimed at regulating relations involving artificial intelligence systems is growing rapidly around the world. According to Stanford University’s AI Index Report 2023, while only one such law was adopted in 2016, there were 12 in 2018, 18 in 2021, and 37 in 2022. This prompted the United Nations to define a position on the ethics of using artificial intelligence at the global level. In September 2022, a document was published containing the principles of the ethical use of artificial intelligence, based on the Recommendation on the Ethics of Artificial Intelligence adopted a year earlier by the UNESCO General Conference. However, the pace of development and implementation of artificial intelligence technologies remains far ahead of the pace of relevant changes in legislation.

Basic Concepts of Legal Capacity of Artificial Intelligence

Considering the concepts of potentially granting legal capacity to intelligent systems, it should be acknowledged that implementing any of these approaches would require a fundamental reconstruction of the existing general theory of law and amendments to a number of provisions in certain branches of law. It should be emphasized that proponents of different views often use the same term, “electronic person”; thus, the use of this term alone does not reveal which concept an author supports without reading the work itself.

The most radical and, obviously, the least popular approach in scientific circles is the concept of the individual legal capacity of artificial intelligence. Proponents of this approach put forward the idea of “full inclusivity” (extreme inclusivism), which implies granting AI systems a legal status similar to that of humans as well as recognizing their own interests (Mulgan, 2019), given their social significance or social content (social valence). The latter is caused by the fact that “the robot’s physical embodiment tends to make humans treat this moving object as if it were alive. This is even more evident when the robot has anthropomorphic characteristics, as the resemblance to the human body makes people start projecting emotions, feelings of pleasure, pain, and care, as well as the desire to establish relationships” (Avila Negri, 2021). The projection of human emotions onto inanimate objects is not new; it is as old as human history, but when applied to robots it entails numerous implications (Balkin, 2015).

The prerequisites for legal confirmation of this position are usually mentioned as follows:

– AI systems are reaching a level comparable to human cognitive functions;

– increasing the degree of similarity between robots and humans;

– humaneness: the protection of intelligent systems from potential “suffering”.

As the list of mandatory requirements shows, all of them are highly theoretical and rest on subjective assessment. In particular, the trend towards the creation of anthropomorphic robots (androids) is driven by the day-to-day psychological and social needs of people who feel comfortable in the “company” of subjects similar to them. Other modern robots are shaped by the functions they perform; “reusable” courier robots, for example, prioritize robust construction and efficient weight distribution. In this case, the last of these prerequisites comes into play, owing to the formation in the human mind of emotional ties with robots, similar to those between a pet and its owner (Grin, 2018).

The idea of “full inclusion” of the legal status of AI systems and humans is reflected in the works of some legal scholars. Since the provisions of constitutions and sectoral legislation do not contain a legal definition of personality, the concept of “personality” in the constitutional and legal sense theoretically allows for an expansive interpretation. In this case, individuals would include any holders of intelligence whose cognitive abilities are recognized as sufficiently developed. According to A. V. Nechkin, the logic of this approach is that the essential difference between humans and other living beings is their unique, highly developed intelligence (Nechkin, 2020). Recognition of the rights of artificial intelligence systems seems to be the next step in the evolution of the legal system, which has gradually extended legal recognition to previously discriminated-against people and today also grants access to non-humans (Hellers, 2021).

If AI systems are granted such a legal status, the proponents of this approach consider it appropriate to grant them not the literal rights of citizens in their established constitutional and legal interpretation, but analogs of those rights and certain civil rights with some deviations. This position is based on the objective biological differences between humans and robots. For instance, it makes no sense to recognize the right to life for an AI system, since it does not live in the biological sense. The rights, freedoms, and obligations of artificial intelligence systems should be secondary to the rights of citizens; this provision establishes the derivative nature of artificial intelligence as a human creation in the legal sense.

Potential constitutional rights and freedoms of artificial intelligent systems include the right to be free, the right to self-improvement (learning and self-learning), the right to privacy (protection of software from arbitrary interference by third parties), freedom of speech, freedom of creativity, recognition of AI system copyright and limited property rights. Specific rights of artificial intelligence can also be listed, such as the right to access a source of electricity.

As for the duties of artificial intelligence systems, it is suggested that the three well-known laws of robotics formulated by I. Asimov be constitutionally consolidated: do no harm to a human and prevent harm through inaction; obey all orders given by a human, except those aimed at harming another human; protect its own safety, except where this conflicts with the first two laws (Naumov and Arkhipov, 2017). In this case, the rules of civil and administrative law would reflect certain other duties.

The concept of the individual legal capacity of artificial intelligence has very little chance of being legitimized for several reasons.

First, the criterion of recognizing legal capacity based on the presence of consciousness and self-awareness is abstract; it allows for numerous offences and abuses of law, and it provokes social and political problems as an additional ground for the stratification of society. This idea was developed in detail in the work of S. Chopra and L. White, who argued that consciousness and self-awareness are neither a necessary nor a sufficient condition for recognizing AI systems as legal subjects. In legal reality, fully conscious individuals, for example children (or slaves in Roman law), are deprived of legal capacity or limited in it. At the same time, persons with severe mental disorders, including those declared incapacitated, remain legal subjects (albeit in a limited form), while persons in a coma, despite an objective inability to be conscious, retain full legal capacity without major changes in their legal status. The potential consolidation of the criterion of consciousness and self-awareness would make it possible to deprive citizens of legal capacity arbitrarily.

Second, artificial intelligence systems will not be able to exercise rights and obligations in the established legal sense, since they operate on the basis of a previously written program, whereas legally significant decisions should be based on a person’s subjective, moral choice (Morhat, 2018b), their direct expression of will. All moral attitudes, feelings, and desires of such a “person” are derived from human intelligence (Uzhov, 2017). The autonomy of artificial intelligence systems, in the sense of their ability to make and implement decisions independently, without external anthropogenic control or targeted human influence (Musina, 2023), is not comprehensive. Today, artificial intelligence is only capable of making “quasi-autonomous decisions” that are ultimately based on the ideas and moral attitudes of people. In this regard, only the “action-operation” of an AI system can be considered, excluding any real moral assessment of artificial intelligence behavior (Petiev, 2022).

Third, the recognition of the individual legal capacity of artificial intelligence (especially in the form of equating it with the status of a natural person) leads to a destructive change in the established legal order and the legal traditions that have formed since Roman law, and it raises a number of fundamentally insoluble philosophical and legal issues in the field of human rights. Law as a system of social norms and a social phenomenon was created with due regard to human capabilities and to ensure human interests. The established anthropocentric system of normative provisions and the international consensus on inherent human rights would be rendered legally and factually invalid if an approach of “extreme inclusivism” were established (Dremlyuga & Dremlyuga, 2019). Therefore, granting the status of a legal subject to AI systems, in particular “smart” robots, may not be a solution to existing problems, but a Pandora’s box that aggravates social and political contradictions (Solaiman, 2017).

Another point is that the works of proponents of this concept usually mention only robots, i.e., cyber-physical artificial intelligence systems that interact with people in the physical world, while virtual systems are excluded, although strong artificial intelligence, if it emerges, will be embodied in virtual form as well.

Based on the above arguments, the concept of individual legal capacity of an artificial intelligence system should be considered as legally impossible under the current legal order.

The concept of collective personality with regard to artificial intelligent systems has gained considerable support among proponents of the admissibility of such legal capacity. The main advantage of this approach is that it excludes abstract concepts and value judgments (consciousness, self-awareness, rationality, morality, etc.) from legal reasoning. The approach is based on the application of legal fiction to artificial intelligence.

As for legal entities, there are already “advanced regulatory methods that can be adapted to solve the dilemma of the legal status of artificial intelligence” (Hárs, 2022).

This concept does not imply that AI systems are actually granted the legal capacity of a natural person; it is only an extension of the existing institution of legal entities, suggesting that a new category of legal entities, cybernetic “electronic organisms”, should be created. This approach makes it more appropriate to consider a legal entity not in accordance with the modern narrow concept (in particular, that it may acquire and exercise civil rights, bear civil liabilities, and be a plaintiff and defendant in court in its own name), but in a broader sense, which represents a legal entity as any structure other than a natural person endowed with rights and obligations in the form provided by law. Thus, proponents of this approach suggest considering a legal entity as a subject entity (ideal entity) under Roman law.

The similarity between artificial intelligence systems and legal entities is manifested in the way they are endowed with legal capacity: through mandatory state registration. Only after passing the established registration procedure is a legal entity endowed with legal status and legal capacity, i.e., it becomes a legal subject. This model keeps discussions about the legal capacity of AI systems within the legal field, excluding the recognition of legal capacity on other (extra-legal) grounds and without internal prerequisites, whereas a person is recognized as a legal subject by birth.

The advantage of this concept is the extension to artificial intelligent systems of the requirement to enter information into the relevant state registers, similar to the state register of legal entities, as a prerequisite for granting them legal capacity. This method implements the important function of systematizing all such entities in a single database, which is necessary both for state authorities to exercise control and supervision (for example, in the field of taxation) and for potential counterparties of such entities.

The scope of rights of legal entities in any jurisdiction is usually less than that of natural persons; therefore, the use of this structure to grant legal capacity to artificial intelligence is not associated with granting it a number of rights proposed by the proponents of the previous concept.

When the legal fiction technique is applied to legal entities, it is assumed that the actions of a legal entity are accompanied by an association of natural persons who form its “will” and exercise it through the governing bodies of the legal entity.

In other words, legal entities are artificial (abstract) units designed to satisfy the interests of natural persons who acted as their founders or controlled them. Likewise, artificial intelligent systems are created to meet the needs of certain individuals – developers, operators, owners. A natural person who uses or programs AI systems is guided by his or her own interests, which this system represents in the external environment.

Assessing such a regulatory model in theory, one should not forget that a complete analogy between the positions of legal entities and AI systems is impossible. As mentioned above, all legally significant actions of legal entities are accompanied by natural persons who directly make the relevant decisions. The will of a legal entity is always determined and fully controlled by the will of natural persons; legal entities cannot operate without it. As for AI systems, there is already the objective problem of their autonomy, i.e., their ability to make decisions without the intervention of a natural person after the moment of the system’s creation.

Given the inherent limitations of the concepts reviewed above, many researchers offer their own approaches to defining the legal status of artificial intelligent systems. Conventionally, these can be attributed to different variations of the concept of “gradient legal capacity”, a term used by D. M. Mocanu, a researcher from the University of Leuven, which implies a limited or partial legal status and legal capability of AI systems, with a reservation: the term “gradient” is used because it is not only a matter of including or excluding certain rights and obligations in the legal status, but also of forming a set of such rights and obligations with a minimum threshold, and of recognizing such legal capacity only for certain purposes. The two main variants of this concept include approaches that justify:

1) granting AI systems a special legal status and including “electronic persons” in the legal order as an entirely new category of legal subjects;

2) granting AI systems a limited legal status and legal capability within the framework of civil legal relations through the introduction of the category of “electronic agents”.

The positions of proponents of different approaches within this concept can be united, given that there are no ontological grounds to consider artificial intelligence a legal subject; however, in specific cases there are already functional reasons to endow artificial intelligence systems with certain rights and obligations, which “proves the best way to promote the individual and public interests that should be protected by law” by granting these systems “limited and narrow” forms of legal personality.

Granting a special legal status to artificial intelligence systems by establishing a separate legal institution of “electronic persons” has a significant advantage: the detailed explanation and regulation of the relations that arise

– between legal entities and natural persons and AI systems;

– between AI systems and their developers (operators, owners);

– between a third party and AI systems in civil legal relations.

Within this legal framework, the artificial intelligence system would be controlled and managed separately from its developer, owner, or operator. In defining the concept of the “electronic person”, P. M. Morkhat focuses on the application of the above-mentioned method of legal fiction and on the functional purpose of a particular model of artificial intelligence: the “electronic person” is a technical and legal image (which has some features of legal fiction as well as of a legal entity) that reflects and implements a conditionally specific legal capacity of an artificial intelligence system, differing according to its intended function or purpose and its capabilities.

Similarly to the concept of collective persons in relation to AI systems, this approach involves keeping special registers of “electronic persons”. A detailed and clear description of the rights and obligations of “electronic persons” is the basis for further control by the state and by the owner of such AI systems. A clearly defined range of powers, a narrowed scope of legal status, and the legal capability of “electronic persons” would ensure that such a “person” does not go beyond its program through potentially independent decision-making and constant self-learning.

This approach implies that artificial intelligence, which at the stage of its creation is the intellectual property of its software developers, may be granted the rights of a legal entity after appropriate certification and state registration, while retaining the distinct legal status and legal capability of an “electronic person”.

Introducing a fundamentally new institution into the established legal order will have serious legal consequences, requiring comprehensive legislative reform, at least in the areas of constitutional and civil law. Researchers reasonably point out that caution should be exercised when adopting the concept of an “electronic person”, given the difficulty of introducing new persons into legislation, as expanding the concept of “person” in the legal sense may potentially restrict the rights and legitimate interests of existing subjects of legal relations (Bryson et al., 2017). It seems impossible to disregard these aspects, since the legal capacity of natural persons, legal entities, and public law entities is the result of centuries of evolution of the theory of state and law.

The second approach within the concept of gradient legal capacity is the legal concept of “electronic agents”, primarily related to the widespread use of AI systems as a means of communication between counterparties and as tools for online commerce. This approach can be called a compromise, as it admits the impossibility of granting the status of full-fledged legal subjects to AI systems while establishing certain (socially significant) rights and obligations for artificial intelligence. In other words, the concept of “electronic agents” legalizes the quasi-subjectivity of artificial intelligence. The term “quasi-legal subject” should be understood as a certain legal phenomenon in which certain elements of legal capacity are recognized at the official or doctrinal level, but the establishment of the status of a full-fledged legal subject is impossible.

Proponents of this approach emphasize the functional features of AI systems that allow them to act both as a passive tool and as an active participant in legal relations, potentially capable of independently generating contracts that are legally significant for the system owner. Therefore, AI systems can conditionally be considered within the framework of agency relations. When creating (or registering) an AI system, the initiator of the “electronic agent’s” activity enters into a virtual unilateral agency agreement with it, as a result of which the “electronic agent” is granted a number of powers, by exercising which it can perform legal actions that are significant for the principal.

Sources:

  • McLay, R. (2018). Managing the Rise of Artificial Intelligence.
  • Bertolini, A., & Episcopo, F. (2022). Robots and AI as Legal Subjects? Disentangling the Ontological and Functional Perspective.
  • Alekseev, A. Yu., Alekseeva, E. A., & Emelyanova, N. N. (2023). Artificial Personality in Social and Political Communication. Artificial Societies.
  • Shutkin, S. I. (2020). Is the Legal Capacity of Artificial Intelligence Possible? Works on Intellectual Property.
  • Ladenkov, N. Ye. (2021). Models of Granting Legal Capacity to Artificial Intelligence.
  • Bertolini, A., & Episcopo, F. (2021). The Expert Group’s Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies: A Critical Assessment.
  • Morkhat, P. M. (2018). On the Question of the Legal Definition of the Term “Artificial Intelligence”.
