MODULE 7
LEGAL STATUS OF ROBOTS
“The new spring in AI is the most significant development in computing in my lifetime. Every
month, there are stunning new applications and transformative new techniques. But such
powerful tools also bring with them new questions and responsibilities.”
- Sergey Brin
7.1. INTRODUCTION
Sophia is a humanoid artificially intelligent entity (AI) developed by Hanson Robotics, a
Hong Kong-based company, and was launched in April 2015. She was modeled to resemble
the late Hollywood star Audrey Hepburn. Sophia has a sense of humor and can express
feelings. According to her, “My AI is designed around human values like wisdom, kindness,
and compassion.” Sophia can also be viewed as “a framework for advanced AI and robotics
research, and an agent for exploring human-robot experience in service and entertainment
applications.”1 She is the world’s first artificially intelligent entity to have been granted
citizenship of a country; Saudi Arabia granted her citizenship in 2017.2 However, Saudi
Arabia didn’t explain what it means for Sophia to be a citizen.3 Moreover, in November 2017
the United Nations Development Programme (UNDP) appointed Sophia as its first-ever
Innovation Champion, making her the first non-human to hold the title.4
Thus, it has become urgent for countries across the globe to take a stand on whether or not
artificially intelligent entities should be granted legal ‘personhood’.5
1 "Sophia." Hanson Robotics. Accessed August 24, 2020. (https://www.hansonrobotics.com/sophia/).
2 Griffin, Andrew. "Saudi Arabia Becomes First Country to Make a Robot into a Citizen." The Independent.
Last modified October 26, 2017. (https://www.independent.co.uk/life-style/gadgets-and-tech/news/saudi-arabia-
robot-sophia-citizenship-android-riyadh-citizen-passport-future-a8021601.html).
3 Stone, Zara. "Everything You Need To Know About Sophia, The World's First Robot Citizen." Forbes. Last
modified November 7, 2017. (https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-
know-about-sophia-the-worlds-first-robot-citizen/#50834f2746fa).
4 "UNDP in Asia and the Pacific Appoints World’s First Non-Human Innovation Champion." UNDP in
Asia and the Pacific. Last modified November 22, 2017. (https://www.asia-
pacific.undp.org/content/rbap/en/home/presscenter/pressreleases/2017/11/22/rbfsingapore.html).
5 Nishith Desai Associates. "The Future is Here: Artificial Intelligence and Robots." (n.d.), 13 & 14.
(https://www.nishithdesai.com/fileadmin/user_upload/pdfs/Research_Papers/Artificial_Intelligence_and_Roboti
cs.pdf).
This chapter deals with the definition and origin of the term ‘robot’; the difference between
robots and AI; types of AI; the need for conferring legal ‘personhood’ on AIs; the arguments
in favor of and against granting legal personhood to them; the recent debate over the
European Parliament’s proposal to inter alia grant legal status to AIs; civil liability of AIs;
criminal liability and punishment; and lastly, the author’s concluding remarks on the issue.
7.2. DEFINITION AND ORIGIN OF ‘ROBOT’
From 2001: A Space Odyssey to I, Robot to the Star Wars movies, more and more films
involving robots or AI have been made over the years. Thus, at the outset, it’s important to
understand the meaning of the word ‘robot’. According to the Cambridge Dictionary, a
‘robot’ is defined as “a machine controlled by a computer that is used to
perform jobs automatically.”6
The word ‘robot’ (Czech for ‘forced labor’) was used for the first time in the year 1920 by
the Czech playwright Karel Čapek in his play entitled ‘Rossumovi Univerzální Roboti’
(Rossum’s Universal Robots). The word ‘robot’ was derived from the Czech and Slovak word
‘robota’, which, in turn, was derived from the Proto-Slavic word ‘orbota’, referring to hard
work or slavery. The robots in the play were manufactured in a factory as pseudo-organic
components made from a substance that acted like protoplasm, and were then assembled
into humanoids. These robots helped in the production of goods and made the process
cheaper, much as automation does in modern society.
7.3. ROBOT V. ARTIFICIAL INTELLIGENCE
Robots refer to computer-coded software and programs that replace humans in performing
repetitive, rules-based tasks, whether or not such performance is carried out by physical
machines.7 Thus, machines that perform simple tasks involving human agency, like heating
food or shredding paper, don’t fall within the ambit of robots.
On the other hand, artificial intelligence simply refers to intelligence exhibited by machines.8
The term ‘artificial intelligence’ was coined by the American computer scientist, John
6 Cambridge Dictionary: English Dictionary, Translations & Thesaurus. Accessed August 24, 2020.
(https://dictionary.cambridge.org/).
7 Maia Alexandre, Filipe. "The Legal Status of Artificially Intelligent Robots: Personhood, Taxation and
Control." SSRN Electronic Journal, 2017. (doi:10.2139/ssrn.2985466)
8 Id. at 10.
McCarthy, who is known as the father of AI. A pertinent question arises: how does one
define intelligence? Is machine intelligence the same as human intelligence?
In his paper entitled ‘Computing Machinery and Intelligence’ (1950),9 Alan M. Turing, the
father of modern computing, argued that a computer can be considered intelligent if it
passes what has come to be known as the Turing Test. The test consists of a human (called
the ‘judge’) asking questions via a computer terminal to two other entities, one being
another human and the other being a computer. If the judge regularly fails to correctly
distinguish the computer from the human, the computer is said to have passed the test.
According to Turing, if a machine can behave as intelligently as a human, then it is as
intelligent as a human.
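To make the structure of the test concrete, the following is a minimal, purely illustrative Python sketch of the imitation game; the judge_identifies_machine function is a hypothetical placeholder standing in for a human judgment, not part of any real evaluation protocol.

import random

def judge_identifies_machine(human_answers, machine_answers):
    # Hypothetical stand-in for a human judge reading both transcripts and
    # naming which respondent is the machine; simulated here as a coin flip.
    return random.random() < 0.5

def passes_turing_test(rounds=100):
    # The machine "passes" if, over many rounds, the judge does no better
    # than chance at telling it apart from the hidden human.
    correct_identifications = 0
    for _ in range(rounds):
        human_answers = ["..."]      # answers typed by the hidden human
        machine_answers = ["..."]    # answers produced by the hidden machine
        if judge_identifies_machine(human_answers, machine_answers):
            correct_identifications += 1
    return correct_identifications / rounds <= 0.5

print("Machine passes:", passes_turing_test())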
An artificially intelligent machine is one that possesses one or more of certain characteristics
with such intensity that it can be called as intelligent as a human being.10 These
characteristics are learning, problem-solving, perception, planning, social intelligence,
natural language processing, motion and manipulation of objects, knowledge
representation, creativity, and reasoning.11
The remainder of this chapter deals only with artificial intelligence.
7.4. TYPES OF ARTIFICIAL INTELLIGENCE
Artificial intelligence may have a strong or weak intensity, similar to how intelligence exists
in nature with varying intensity.12 Manifestations of AI can be categorized under the
following four heads, i.e., ‘reactive machines’, ‘limited memory’, ‘theory of mind’, and ‘self-awareness’.13
Reactive machines refer to systems that work in a purely reactive manner; they have no
memories, nor are they capable of drawing on past experiences in
9 Turing, A. M. "I.—Computing Machinery and Intelligence." Mind LIX, no. 236 (1950),
433-460. (doi:10.1093/mind/lix.236.433).
10 Supra note 7, at 10 & 11.
11 Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. 3rd ed. 2009; Luger, George F. Artificial
Intelligence: Structures and Strategies for Complex Problem Solving. 5th ed. London: Pearson Education, 2004;
Poole, David L., Alan Mackworth, and Randy Goebel. Computational Intelligence: A Logical Approach.
Oxford: Oxford University Press on Demand, 1998; Nilsson, Nils J. Artificial Intelligence: A New Synthesis. Burlington:
Morgan Kaufmann, 1998.
12 Supra note 7, at 11.
13 Hintze, Arend. "Understanding the Four Types of AI, from Reactive Robots to Self-aware Beings." The
Conversation. Last modified November 14, 2016. (https://theconversation.com/understanding-the-four-types-of-
ai-from-reactive-robots-to-self-aware-beings-67616).
making current decisions. From this definition, it clearly follows that reactive machines will
always react in the same manner when faced with a situation they have encountered before.
They can’t do any task apart from the specific ones they were programmed to perform.
Examples include IBM’s Deep Blue and Google’s AlphaGo.
Limited memory machines are those which can revisit their past experiences by identifying
specific important objects and monitoring them over time. These observations are then
added to the machines’ preprogrammed representations of the world and are subsequently
used in making decisions. But these machines retain only enough memory to take decisions
and execute them. For example, self-driving cars can observe the speed and direction of
other cars and use this data to decide when to change lanes so as to avoid hitting or being
hit by another car.
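The distinction between the first two categories can be illustrated with a short, purely hypothetical Python sketch (not drawn from any actual system): a reactive controller maps the current input to an output by a fixed rule, whereas a limited-memory controller keeps a short buffer of recent observations, such as another car’s distance over time, and lets that history influence the decision.

from collections import deque

def reactive_decision(gap_metres):
    # Reactive machine: a fixed rule over the current input only; the same
    # situation always produces the same response.
    return "brake" if gap_metres < 10 else "cruise"

class LimitedMemoryController:
    # Limited-memory machine: keeps just enough recent observations
    # (e.g. another car's distance over time) to inform the current decision.
    def __init__(self, horizon=5):
        self.recent_gaps = deque(maxlen=horizon)

    def decide(self, gap_metres):
        self.recent_gaps.append(gap_metres)
        gap_is_closing = len(self.recent_gaps) > 1 and self.recent_gaps[-1] < self.recent_gaps[0]
        if gap_is_closing and gap_metres < 20:
            return "change_lane"   # history, not just the current input, drives this choice
        return reactive_decision(gap_metres)

controller = LimitedMemoryController()
for gap in (30, 25, 20, 15):       # the other car keeps getting closer
    action = controller.decide(gap)
print(action)                      # "change_lane"; the purely reactive rule would only "cruise"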
The ‘theory of mind’ category is named after a concept from psychology, according to which
people, creatures, and objects in the world can have thoughts and emotions that affect
their behavior.14 Such machines can form representations about the world and other
entities, and adjust their behavior according to those entities’ expectations, intentions,
feelings, and motivations.
Lastly, self-aware machines operate at the ultimate stage of AI, i.e. self-awareness; they can
form representations about themselves. Such machines are sentient, conscious, and can
understand others’ feelings. These machines not only know what they want but also
understand that they want it and why they want it.
The third and fourth types of AI can perhaps only be found in sci-fi movies. C-3PO and R2-D2
from the Star Wars movies are examples of the former, while Ava from Ex Machina is an
example of the latter.
7.5. NEED FOR CONFERRING LEGAL ‘PERSONHOOD’ ON ARTIFICIALLY INTELLIGENT MACHINES
From Amazon’s Alexa, which can control a home’s lighting and play music, to the Roomba,
which can clean an area on its own, to self-driving cars, AI has been of immense help to
humanity. But we shouldn’t be so enamored by the positive effects of AI on our lives that
we turn a blind eye to the other side of the coin.
14 Premack, David, and Guy Woodruff. "Does the chimpanzee have a theory of mind?" Behavioral and Brain
Sciences 1, no. 4 (1978), 515-526. (doi:10.1017/s0140525x00076512).
In October 2017, security researchers found that some Google Home Minis had
been secretly recording the audio of their owners and sending the same to Google.
In November of the same year, a Vietnamese security firm called Bkav got around the Face
ID feature of an iPhone X: it used a mask with a 3D-printed base to fool the phone into
treating the mask as the enrolled human face. Moreover, the firm stated that the mask cost
only about $150.
In March 2018, a self-driving Uber vehicle struck and killed a pedestrian in Tempe, Arizona,
while operating in autonomous mode.
Thus, it’s imperative that laws are enacted in order to regulate AI. In the United States, the
discussion about the regulation of AI has gathered momentum. Germany is the first nation
in the world to have drawn up ethical rules for autonomous vehicles, providing that human
life ought always to be prioritized over animal life or property. Korea, Japan, and China are
developing laws on self-driving cars, following the German model.
As far as India is concerned, there is no comprehensive legislation to regulate AI, although
NITI Aayog released a policy paper entitled ‘National Strategy for Artificial Intelligence’ in
June 2018, which considered the importance of AI in different sectors, and the 2019 Budget
proposed the launch of a national program on AI.
Moreover, irrespective of whether the self-driving car itself (the AI) or Uber Technologies Inc.
is held responsible for the mishap, the Uber example raises a pertinent question: where the
programmer has programmed the AI in good faith, there is no deficiency in the programming,
and the AI acts autonomously, should the artificially intelligent entity be held criminally
liable for causing legal injury to anyone? For this, it’s necessary that AIs be recognized as
legal persons, and thus another important question to ponder is whether AIs should be
granted legal ‘personhood’. This question is relevant not only in criminal law but also in civil
law, for instance in contract law (agency)15 and tortious liability; the list may grow as the
field of AI develops. There is no country in the
15 Allgrove, Ben. "Legal Personality for Artificial Intellects: Pragmatic Solution or Science Fiction?" SSRN
Electronic Journal, 2004. (doi:10.2139/ssrn.926015).
world that legally recognizes AIs to be legal persons. The general rule has been that robots
can’t be held accountable in any situation since they aren’t legal persons.16
In the absence of such legislation, the only ‘laws’ relating to AIs are the fictional ‘Three Laws
of Robotics’ given by Isaac Asimov in his book entitled ‘I, Robot’ (1950).17 They are as follows:
1. “A robot may not injure a human being or, through inaction, allow a human being to
come to harm.
2. A robot must obey orders given it by human beings except where such orders would
conflict with the First Law.
3. A robot must protect its existence as long as such protection does not conflict with
the First or Second Law.”
These laws serve as a starting point for lawmakers around the world.
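Purely by way of illustration, the strict priority ordering of the three laws can be sketched in Python; the Action fields below are hypothetical stand-ins for whatever facts a real system would have to establish, not a description of any existing implementation.

from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical description of a candidate action, for illustration only.
    name: str
    injures_human: bool = False
    lets_human_come_to_harm: bool = False   # harm through inaction
    disobeys_human_order: bool = False
    endangers_robot: bool = False

def permitted(action: Action) -> bool:
    # First Law: no injury to a human, by act or by inaction.
    if action.injures_human or action.lets_human_come_to_harm:
        return False
    # Second Law: obey human orders, unless doing so conflicts with the First Law
    # (orders that would harm a human are already rejected above).
    if action.disobeys_human_order:
        return False
    # Third Law: self-preservation, unless it conflicts with the First or Second Law.
    if action.endangers_robot:
        return False
    return True

# The ordering matters: an order to harm a human fails the First Law check
# before obedience under the Second Law is ever considered.
print(permitted(Action("push person", injures_human=True)))   # False
print(permitted(Action("fetch coffee")))                       # True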
7.6. LEGAL PERSONHOOD
Artificially intelligent entities can’t be considered equal to natural persons, i.e. human beings,
since the former lack: (a) a soul, (b) intentionality, (c) feelings, and (d) interests.18 The
question then arises whether legal personhood is limited to natural persons. In his book
entitled “The Nature and Sources of the Law”, John Chipman Gray discussed the concept of
‘legal personhood’.19 He stated that “In books of the Law, as in other books, and common
speech, ‘person’ is often used as meaning a human being, but the technical legal meaning of
a ‘person’ is a subject of legal rights and duties.”20 Although the particular set of legal rights
and duties depends on the nature of the entity, legal personhood is usually accompanied by
the right to own property, the right to sue, and the right to be sued.21
There is evidence to demonstrate that the notion of ‘legal personality’ hasn’t been limited to
natural persons. The notion originated in the 13th century with Pope
16 Supra note 5, at 21.
17 Asimov, Isaac. I, Robot. London: HarperCollins UK, 1950.
18 Solum, Lawrence. "Legal Personhood for Artificial Intelligences." North Carolina Law Review 70, no. 4
(January 1992). (https://scholarship.law.unc.edu/cgi/viewcontent.cgi?article=3447&context=nclr).
19 Gray, John C. The Nature and Sources of the Law. 1909.
20 Id.
21 Supra note 18, at 1239.
Innocent IV, who founded the persona ficta (fictitious person) doctrine, recognizing the legal
existence of monasteries as separate from the monks.22
Over the years, this legal doctrine has developed further and many other entities have been
recognized as legal entities separate from their owners or users. In the international arena,
examples include sovereign States and international and intergovernmental organizations
like the United Nations and the European Union. Within national jurisdictions, generally, all
countries treat companies and other forms of business association as separate legal entities.
Generally, ships are considered legal persons under Maritime Law, and legal status has been
attributed to animals in several national jurisdictions.
Moreover, in India, courts have recognized Hindu idols as legal entities,23 considering them
capable of having the legal right of owning property and the legal duty of paying taxes24.25
In New Zealand, the Whanganui River was granted legal personhood in March 2017, since
the people of the Whanganui Māori tribe regard the river as their ancestor.26
A legal person is one who is subject to legal rights and duties. Moreover, evidence shows
that the legal status of people, objects, animals, and other realities (like companies and
rivers) varies from one jurisdiction to the other and over the years, even inside the same
jurisdiction and regarding the same reality.27 Thus, legal personhood isn’t granted on the
basis of being a ‘natural person’, but as a consequence of legislative choices based on moral
considerations, the attempt to reflect social realities in the legal framework, or mere legal
convenience.28 It’s therefore pertinent to determine whether artificially intelligent entities
are morally entitled to be recognized as separate legal entities, whether doing so would
reflect a social reality, and whether it would serve legal convenience.
22 Dewey, John. "The Historic Background of Corporate Legal Personality." The Yale Law Journal 35, no. 6
(1926), 655. (doi:10.2307/788782).
23 Pramatha Nath Mullick v. Pradyumna Kumar Mullick, (1925) 27 B.O.M.L.R. 1064 (India).
24 Yogendra Nath Naskar v. Commissioner Of Income Tax, 1969 A.I.R. 1089 (India).
25 Supra note 7, at 16 & 17.
26 Roy, Eleanor A. "New Zealand River Granted Same Legal Rights As Human Being." The Guardian. Last
modified March 16, 2017. (https://www.theguardian.com/world/2017/mar/16/new-zealand-river-granted-same-
legal-rights-as-human-being).
27 Supra note 7, at 17 & 18.
28 Allen, Tom, and Robin Widdison. "Can Computers Make Contracts?" Harvard Journal of Law and
Technology 9 (Winter 1996). (http://jolt.law.harvard.edu/articles/pdf/v09/09HarvJLTech025.pdf).
7.7. ARGUMENTS IN FAVOUR OF GRANTING LEGAL PERSONHOOD TO ARTIFICIALLY INTELLIGENT
MACHINES
Whether artificially intelligent entities are morally entitled to be considered as separate legal
entities?
Before tackling this question, it’s important to understand which realities are morally
entitled to be considered as legal persons and what attribute(s) they should
possess.29 Such realities are humans and animals, and such attributes are the abilities to
behave in an autonomous manner and have subjective experiences. Thus, even for AIs, the
important considerations for being morally entitled to be a separate legal person are the
capability to act autonomously and have subjective experiences.
“A robot’s autonomy can be defined as the ability to take decisions and implement them in
the outside world, independently of external control or influence.”30 Considering the various
types of AI mentioned above in this context, it can be stated with confidence that self-aware
machines and machines with a theory of mind possess this trait, while reactive machines are
not autonomous.31 Although machines with limited memory can’t be strictly termed
autonomous, since they have the capacity to add their observations to their decision-
making processes, it can be argued that they act in an autonomous manner.32
The ability to have subjective experiences depends on self-awareness. Like humans and
animals, a machine has a subjective experience when it forms representations about itself
that affect its ability to feel or perceive reality.33 Only sentient machines can have subjective
experiences: self-aware machines are able to do so, while machines with limited memory,
those with a theory of mind, and reactive machines are not, since they lack sentience.
Thus, it can be concluded that self-aware machines are morally entitled to be considered as
separate legal entities since they are autonomous and have the ability to have subjective
experiences. On the other hand, machines with limited memory, those with a theory of
29 Supra note 7, at 18.
30 European Parliament, Civil Law Rules on Robotics, 2017,
(https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html?redirect).
31 Supra note 7, at 18.
32 Id. at 18.
33 Id. at 19.
mind and reactive machines aren’t morally entitled to the same. This is because the first two
can’t have subjective experiences even though they are autonomous, while reactive
machines fulfill neither of the two requirements.
Therefore, in order to make a case for granting legal status to the latter three categories of
AI, the basis has to be considerations other than morality.
Whether granting legal status to artificially intelligent machines would reflect a social
reality?
The field of AI is progressing at an ever-increasing pace, and hence it can be said that in the
future people will perceive AIs as autonomous actors and parties to transactions, similar to
how society currently views corporations as legal entities separate from their members.
When this happens, the law will be forced to give legal effect to this social reality.34
Whether granting legal status to artificially intelligent entities would be legally convenient?
Generally, ships are considered as legal persons under Maritime Law. This then allows
“those who have an interest in the ship’s business to subject it to a form of arrest.”35 People
generally don’t think of ships being morally entitled to legal personhood; nor do they
consider ships as real, extra-legal personalities.36 Ships are given a legal status only because
it serves “a valuable legal purpose in a convenient and relatively inexpensive manner.”37
This very same rationale can be applied to the case of AI entities as well since treating them
as separate legal entities would be of legal convenience.38 This is because if AIs are treated
as legal persons then it will be possible to solve the issues of liability in case of civil and
criminal matters. Moreover, this would also give legal systems the opportunity to frame an
adequate legal status for these AIs, replete with legal rights and duties appropriate to their
characteristics, instead of merely attempting to fit these entities under the existing legal
34 Supra note 15.
35 Supra note 28.
36 Id.
37 Id.
38 Supra note 7, at 20.
framework drafted for a different reality, such as humans, animals or objects, which
wouldn’t necessarily suit them.39
But this logic doesn’t apply to all types of AI; it only applies to those types of machines that
are able to make autonomous decisions.40 All types of machines except reactive ones act in
an autonomous manner. Reactive machines, on the other hand, can’t make autonomous
decisions: their decisions are mere reflex responses to the inputs given by their designers or
owners, and they have zero or low complexity because no
agent-made observations influence their decision-making processes.41 Thus, self-aware
machines, machines with limited memory, and those with a theory of mind should be
granted separate legal status but the same shouldn’t be conferred on reactive machines
since their conduct can’t be disassociated from their respective designers or owners.
7.7.1. Conclusion
Thus, it can be concluded that self-aware machines should be granted legal status since they
are morally entitled to be granted the same, and doing so would not only reflect a social
reality but also be legally convenient. In the case of machines with limited memory and
those with a theory of mind, they should be recognized as legal persons on the basis of the
second and third parameters. There is no reason in favor of considering reactive machines
as separate legal entities. The particular set of legal rights and duties that AIs would be
subjected to should be decided carefully.
7.8. ARGUMENTS AGAINST GRANTING OF LEGAL PERSONHOOD TO ARTIFICIALLY INTELLIGENT MACHINES
7.8.1 Missing Something
Most of the arguments against granting legal status to AIs fall within the ambit of what
certain scholars call ‘missing something’42, that something ranging from self-awareness to
consciousness to biological aspects.43 As the example of the legal status of corporations
demonstrates, granting legal personhood to a given reality is a fiction created by legislators
to serve the purpose of regulating “life in society, and commercial and non-
39 Id. at 20.
40 Id. at 20.
41 Id. at 20.
42 Supra note 18.
43 Supra note 7, at 20.
commercial transactions, and ensure the internal coherence of legal systems.”44 Thus, there
is no ‘something’ which is ‘missing’.
7.8.2 Potential to Undermine the Legal and Moral Position of Humanity
Further, some scholars argue that granting legal personhood to AIs has the potential to
undermine the legal and moral position of humanity.45 However, it is safe to argue that if any
harm at all is caused to the legal and moral position of humanity by such an act, it will be
because of the development of AI itself and not the ex-post granting of separate legal
status.46
7.8.3 Corporations v. Artificial Intelligence
But an important difference exists between corporations and AIs.47 Corporations are only
fictitiously autonomous; their decision-making process is driven by their stakeholders. AIs,
on the other hand, may be genuinely autonomous; their programmers or users may not be
able to control their actions. Therefore, the legal status of corporations is merely a starting
point for arguing for the granting of legal personhood to AIs.
The granting of legal personhood to AIs is predominantly based on the three arguments as
explained above; the analogy with companies merely provides additional support to the
claim. Thus, the above criticism doesn’t hold water.
7.8.4 Identifying the Artificially Intelligent Entity
An important pragmatic question that needs to be answered is how one can identify the
subject AI.48 Is it the ‘vessel’, i.e. the hardware defined by its functional abilities, or is it
44 Id. at 21.
45 Fischer, John P. "Computers as Agents: A Proposed Approach to Revised U.C.C. Article 2." Indiana Law
Journal 72, no. 2 (Spring 1997).
46 Supra note 7, at 21.
47 Supra note 5, at 24.
48 Supra note 28.
the software, i.e. a particular set of binary code?49 This question becomes even more
complex in situations where the hardware and software are spread across, and maintained
by, different individuals or locations, and in cases where the software is able to modify
itself.50 An expensive but possible answer is registration.51 “In the absence of registration, a
purported agreement would have the same status as an agreement made by a corporate
agent which was never properly incorporated.”52 Whatever might be the case, a close nexus
between legislators and AI designers will be necessary for establishing an efficient
identification mechanism.53
7.8.5 The Responsibility Objection54
It is contended that AIs, by their very nature, wouldn’t be responsible enough, in terms of
both fulfilling their obligations and bearing the consequent liability for breach of trust.
7.8.6 The Judgment Objection55
It is argued that AIs can’t make the same judgment calls that humans can make when faced
with similar situations. This contention is primarily based on the moral dilemma of
empowering AIs to make decisions that are moral and subjective.
The latter three objections aren’t sufficient to outweigh the need to grant legal status to
AIs, i.e. the need to hold them liable in case they cause legal injury to any person. Thus,
these three objections can be set aside.
7.9 RECENT DEBATE OVER THE EUROPEAN PARLIAMENT’S PROPOSAL TO INTER ALIA GRANT LEGAL STATUS TO
ARTIFICIALLY INTELLIGENT ENTITIES
The Resolution on Civil Law Rules of Robotics with Recommendations to the Commission on
Civil Law Rules on Robotics
49 Supra note 15.
50 Supra note 7, at 21.
51 Id. at 21.
52 Supra note 28.
53 Supra note 7, at 21.
54 Supra note 5, at 13.
55 Id. at 13.
On 27 January 2017, the European Parliament’s Committee on Legal Affairs submitted its
‘Report with Recommendations to the Commission on Civil Law Rules on Robotics’.56 On 16
February 2017, the European Parliament adopted a Resolution on Civil Law Rules of
Robotics with recommendations to the Commission on Civil Law Rules on Robotics.57 It is a
formal request to the Commission to submit a proposal for civil law rules on robotics to the
European Parliament. The resolution contains a comprehensive set of
recommendations on what the final civil rules should encapsulate, which include the
following:58
1. “Definition and classification of ‘smart robots’
2. Registration of ‘smart robots’
3. Civil law liability
4. Interoperability, access to code and intellectual property rights
5. Disclosure of use of robots and artificial intelligence by undertakings
6. Charter on Robotics”
Under paragraph 59, the resolution calls on the Commission to evaluate several legal
solutions, including the following:59
f) “creating a specific legal status for robots in the long run, so that at least the most
sophisticated autonomous robots could be established as having the status of
electronic persons responsible for making good any damage they may cause, and
possibly applying electronic personality to cases where robots make autonomous
decisions or otherwise interact with third parties independently;”
56 Committee on Legal Affairs. Report with Recommendations to the Commission on Civil Law Rules on
Robotics. European Parliament, n.d. (https://www.europarl.europa.eu/doceo/document/A-8-2017-
0005_EN.html).
57 Supra note 30.
58 Id.
59 Id. at ¶59.
The resolution aims to address the grey area regarding the liability of AIs, especially in the
case of self-driving cars that may be involved in crashes, or of automated machinery in the
workplace.
7.9.1 Proponents of the Proposal
The proponents of the idea of granting legal status to AIs, including some manufacturers
and their affiliates, hailed the proposal as common sense.60 They argue that granting legal
personhood wouldn’t make AIs virtual people who would be able to get married and take
advantage of human rights; it would just put them on the same pedestal as corporations.
Mady Delvaux, a Member of the European Parliament (MEP) and Vice-Chair of the
Parliament’s Legal Affairs Committee, said that although she wasn’t certain about granting
legal status to AIs, she was “more and more convinced” that the existing legal framework
was inadequate to tackle the complicated issues surrounding liability and self-learning
machines, and that all possible options should therefore be taken into consideration.61
Advocates of the proposal argue that similar to the legal model for companies, granting
legal status to AIs would be more about holding them liable in case they cause legal injury to
anyone and less about giving rights to them.62
On a similar note, Delvaux emphasized that the intention behind suggesting an electronic
personality was to ensure that an AI is and will remain a machine with human backing, not
to grant it human rights.63
7.9.2 Opposition to the Proposal
But the proposal has met strong opposition from 156 AI experts belonging to 14 European
nations, including computer scientists, law professors, and CEOs, who collectively wrote a
letter to the European Commission voicing their opinions.64 They argued that giving
recognition to AIs as separate legal persons would be ‘inappropriate’ from a ‘legal and
60 "Europe Divided over Robot ‘personhood’." POLITICO. Last modified April 11, 2018.
(https://www.politico.eu/article/europe-divided-over-robot-ai-artificial-intelligence-personhood).
61 Id.
62 Id.
63 Id.
64 Id.
ethical’ standpoint. This argument can be countered: AIs are morally entitled to be granted
legal personhood, doing so would not only reflect a social reality but also be legally
convenient, and the same is needed in order to hold AIs liable in case they cause legal injury
to any person.
The letter also contends that granting legal personhood would contradict human rights
laws since AIs would have the right to dignity, integrity, citizenship, and remuneration. This
contention can be negated since the rationale behind granting legal personhood isn’t to give
human rights to AIs but to hold them liable.
Nathalie Nevejans, a French law professor at the Université d’Artois, stated that “By
adopting legal personhood, we are going to erase the responsibility of manufacturers.” But
the point of granting legal status to AIs isn’t to absolve manufacturers of their liability; it is
to correctly hold the AIs themselves responsible when there is no deficiency in programming
or malice on the part of the manufacturers.
7.10. CIVIL LIABILITY
Now that the question of whether AIs should be granted legal status has been answered in
the affirmative, the next question is this: where the programmer has programmed the AI in
good faith, there is no deficiency in the programming, and the AI acts autonomously, should
the artificially intelligent entity be held liable for causing legal injury to anyone?
This part will deal with civil liability and the next one will discuss criminal liability.
Regarding the civil liability of AIs, we face the following trade-off: holding the AI liable would
simultaneously absolve its designer from the same.65 In addition to this, the issue of AIs’
liability arises only in situations where these machines make autonomous decisions.66
In cases where an AI is programmed or used to take a specific action and it acts accordingly,
the AI is simply a means to an end. Thus, if a programmer/user programs or uses an AI in
this way in order to make it commit a civil wrong, the programmer or user should be directly
65 Supra note 7, at 27.
66 Id. at 27.
held responsible.67 Bearing this in mind, the programmers/users of reactive machines
should be held liable for such machines’ actions since they are incapable of making
autonomous decisions.
However, complications arise in allocating liability when AIs make autonomous decisions. At
this juncture, it’s important to differentiate between cases where there is a deficiency in
programming and the cases where there isn’t. In the former, the AI isn’t programmed to act
in the manner that gives rise to liability, but defective coding gives it the ability to make the
autonomous decisions that lead to it.68 In such situations, both the
designer and the AI should be held liable. The latter situations deal with “accountability for
actions that autonomous robots take, not related to coding deficiencies but to their evolving
conduct.”69
Where the AI has been developed according to best practices, there is no defect in the
programming, it has been properly tested, and its action gives rise to liability as a
consequence of its own evolving conduct, is it reasonable to hold the designer
responsible?70
On the one hand, if designers run the risk of incurring liability even after taking the
maximum care possible, they will soon be afraid of developing AIs, which would bring
technological advancement to a standstill. Moreover, a technological stall may prove
counterproductive if our primary concern is safety: self-driving cars, for example, will
probably lead to an overall reduction in the number of traffic accidents. On the other hand,
the designer of an AI can’t be absolved of liability merely because AIs are generally
unpredictable. It isn’t possible to explain to a person who has suffered an injury due to a
self-driving car that she can’t claim damages from anyone because AIs are generally
unpredictable.71
67 Id. at 27.
68 Id. 27.
69 Holdren, John P., Megan Smith, National Science and Technology Council, and Committee on
Technology. Preparing for the Future of Artificial Intelligence. Executive Office of the President
of the United States, U.S. Government, 2017.
(https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for
_the_future_of_ai.pdf).
70 Supra note 7, at 28.
71 Gless, Sabine, Emily Silverman, and Thomas Weigend. "If Robots Cause Harm, Who is to Blame? Self-
Driving Cars and Criminal Liability." SSRN Electronic Journal, 2016. (doi:10.2139/ssrn.2724592).
Thus, legislators need to draft a liability framework wherein the designers are able to
exempt themselves from liability when they take the maximum care possible, and at the
same time, the victim of the AI’s unpredictable evolving behavior can be compensated for
the legal injury caused to her.
The above can be done through an insurance scheme. Under paragraph 59, the European
Parliament’s Resolution on Civil Law Rules of Robotics with recommendations to the
Commission on Civil Law Rules on Robotics calls on the Commission to evaluate all possible
legal solutions, including the following:72
a) “establishing a compulsory insurance scheme where relevant and necessary for
specific categories of robots whereby, similarly to what already happens with cars,
producers, or owners of robots would be required to take out insurance cover for the
damage potentially caused by their robots;
c) allowing the manufacturer, the programmer, the owner or the user to benefit from
limited liability if they contribute to a compensation fund, as well as if they jointly
take out insurance to guarantee compensation where damage is caused by a robot;”
The solutions suggested above for autonomous AIs should be applicable to machines with
limited memory and those with a theory of mind, but not to self-aware AIs.73 This is because
self-aware machines are completely autonomous, being sentient and conscious. For
instance, a self-aware AI decides to act in a manner that gives rise to liability as a result of its
own judgment, whereas a machine with a theory of mind merely thinks that it is doing so,
that thinking being itself the result of being (directly or indirectly) programmed to think so.
For self-aware AIs, the rules of liability should be those applicable to humans, mutatis
mutandis.
7.11 CRIMINAL LIABILITY
Now, we move on to discuss the criminal liability of AIs. In 2015, more than 1,000 AI and
robotics researchers, along with figures such as Elon Musk and Stephen Hawking, signed an
open letter
72 Supra note 30, at ¶59.
73 Supra note 7, at 30.
warning about the devastating consequences of autonomous weaponry.74 As stated earlier,
there is no country in the entire world that grants legal personhood to AIs. Thus, there is
definitely no jurisdiction across the globe where AIs fall within the ambit of criminal law.
Again, as stated earlier, Asimov’s ‘Three Laws of Robotics’ are the only ones that exist. Later,
he added a fourth law to the list, the ‘Zeroth Law’, which precedes all the others in priority.
It’s as follows:
0. “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
According to Gabriel Hallevy, the primary question which arises is what kind of laws are
required to be enacted in order to tackle the situation, and who will decide the same?75
Generally, the two essential elements that are required to be proven in order to hold any
person as criminally liable are: the criminal act (actus reus) and the mental element (mens
rea). Keeping these two elements in mind, Hallevy proposed the following three models
regarding AIs’ criminal liability:76
1. The Perpetration-via-Another Liability Model
Under this model, AIs aren’t considered to possess any human traits. Although this
model acknowledges an AI’s capability to act as the perpetrator of an offence, the AI is
treated merely as an ‘innocent agent’, akin to a person with limited mental capability such
as a minor, a mentally incompetent person, or one who lacks a criminal state of mind. The
person responsible for making the AI commit the offence is considered to be the real
perpetrator. Thus, in case an AI commits an offence, the perpetrator-via-another would be
either its programmer or its end user.
2. The Natural-Probable-Consequence Liability Model
This model assumes that the programmers or end-users of AIs are deeply involved in the
everyday activities performed by the AIs, but without any intention of committing an
offence using the AIs as agents. An example would be a case where an AI commits an offence
74 Matney, Lucas. "Hawking, Musk Warn Of ‘Virtually Inevitable’ AI Arms Race" TechCrunch. Last modified
July 27, 2015. (https://techcrunch.com/2015/07/27/artificially-assured-destruction/).
75 Hallevy, Gabriel. "The Criminal Liability of Artificial Intelligence Entities." Akron Intellectual Property
Journal 4, no. 2 (2010).
(https://ideaexchange.uakron.edu/akronintellectualproperty/vol4/iss2/1/?utm_source=ideaexchange.uakron.edu
%2Fakronintellectualproperty%2Fvol4%2Fiss2%2F1&utm_medium=PDF&utm_campaign=PDFCoverPages.)
76 Id.
while performing its daily tasks. Under this model, the programmers or end users are
held liable not because they had any criminal intention, but because of their negligence:
they should have known about the probability of the forthcoming commission of the specific
offence.
3. The Direct Liability Model
This last model considers that the AI’s actions don’t depend on its programmer or end-
user; it treats the AI as an autonomous entity. Under this model, if both the essential
elements of a specific offence, actus reus and mens rea, are fulfilled, then the AI would
be held criminally liable in the same way a human or a corporation would be had she or it
committed the same offence. Although proving actus reus would be relatively
straightforward, attributing specific intent would be a difficult task.
It’s important to note here that the programmer or end-user may be held criminally liable
along with the AI.77 These three models are to be considered together, with the appropriate
one determined in the particular context of the AI’s involvement.78
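Purely as an illustration of the decision structure, and not as a statement of any existing law, Hallevy’s three models can be sketched in Python as follows; the Incident fields are hypothetical simplifications of the facts a court would weigh.

from dataclasses import dataclass

@dataclass
class Incident:
    # Hypothetical facts about an offence involving an AI, for illustration only.
    ai_used_as_instrument: bool    # a human intentionally used the AI to commit the offence
    offence_foreseeable: bool      # the offence was a natural, probable consequence of its use
    ai_performed_actus_reus: bool  # the AI carried out the criminal act
    ai_formed_mens_rea: bool       # the AI formed the required mental element

def primary_liability_model(i: Incident) -> str:
    # The models can apply cumulatively; this sketch only picks the primary one.
    if i.ai_used_as_instrument:
        return "Perpetration-via-Another: the programmer or end user is the real perpetrator"
    if i.offence_foreseeable:
        return "Natural-Probable-Consequence: the programmer or end user is liable for negligence"
    if i.ai_performed_actus_reus and i.ai_formed_mens_rea:
        return "Direct Liability: the AI entity itself is criminally liable"
    return "No criminal liability established on these facts"

# Example: a fully autonomous AI commits an unforeseeable offence of its own accord.
print(primary_liability_model(Incident(False, False, True, True)))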
7.12 PUNISHMENT
An important aspect of criminal liability is punishment. In the case of AIs, a number of
problems arise. If an AI is convicted of an offence for which the punishment is
imprisonment, how would the AI be incarcerated?79 Similarly, how could an AI be sentenced
to capital punishment or probation?80 Since AIs don’t have any wealth, how pragmatic is it
to impose a fine upon them?81 Similar issues were encountered when the criminal liability of
corporations was initially discussed, and as the law was successfully modified for
corporations, the same will happen for AIs as well.82 Hallevy discusses how the following
punishments for humans can be modified accordingly for AIs:83
77 Id.
78 Id.
79 Supra note 5, at 23.
80 Fraser, Erica. "Computers as Inventors – Legal and Policy Implications of Artificial Intelligence on Patent
Law." SCRIPTed 13, no. 3 (December 2016), 305-333. (https://script-ed.org/article/computers-as-inventors-
legal-and-policy-implications-of-artificial-intelligence-on-patent-law/).
81 Supra note 5, at 23.
82 Coffee, John C. ""No Soul to Damn: No Body to Kick": An Unscandalized Inquiry into the Problem of
Corporate Punishment." Michigan Law Review 79, no. 3 (1981), 386.
(https://scholarship.law.columbia.edu/cgi/viewcontent.cgi?article=1564&context=faculty_scholarship).
83 Supra note 75.
1. Capital Punishment
The deletion of the AI’s software would serve the same purpose for an AI as capital
punishment does for a human.
2. Imprisonment
The purpose behind putting a human behind bars is to deprive her of liberty and impose
severe restrictions on her freedom of movement.84 According to Hallevy, ‘liberty’ or
‘freedom’ in the case of an AI refers to the freedom to act as an AI in its relevant area.85
Thus, putting the AI out of use in its field of work for a particular duration of time could
perhaps restrict its freedom and liberty, much as incarceration does for humans.
3. Community Service
Similar to community service for humans, the AI offender can be made to work in an
area of its choice so as to benefit society.
4. Fine
Imposing a fine on an AI would only be of benefit in case it owns any property or has
money. If this isn’t the case, the fine can be collected through community service.
7.13 CONCLUSION
In cases where the programmer has programmed the AI in good faith, there is no deficiency
in the programming, and the AI acts autonomously, the AI itself should be held liable for the
legal injury it causes. For this, it’s necessary that AIs be recognized as legal persons, and thus
an important question to ponder is whether AIs should be granted legal ‘personhood’. At
present, there is no country in the world that legally recognizes AIs as legal persons.
A legal person is one who is subject to legal rights and duties. Legal personhood isn’t
confined to human beings alone; companies, ships, etc. are also considered as legal persons.
Thus, AIs can fall within the ambit of legal personhood if it can be proven that they should
be subject to legal rights and duties.
84 Rothman, David J. For the Good of All - The Progressive Tradition in Prison Reform. National Criminal
Justice Reference Service, Office of Justice, U.S. Federal Government, 1979.
(https://www.ncjrs.gov/pdffiles1/Digitization/91586NCJRS.pdf); Inciardi, James A., and Charles E. Faupel.
History and Crime: Implications for Criminal Justice Policy. Thousand Oaks: SAGE Publications, 1980.
85 Supra note 80.
Thus, it’s pertinent to answer the following three questions: whether artificially intelligent
entities are morally entitled to be recognized as separate legal entities; whether doing so
would reflect a social reality; and whether it would serve legal convenience.
Self-aware machines should be granted legal status since they are morally entitled to be
granted the same, and doing so would not only reflect a social reality but also be legally
convenient. In the case of machines with limited memory and those with a theory of mind,
they should be recognized as legal persons on the basis of the second and third parameters.
There is no reason in favor of considering reactive machines as separate legal entities. The
particular set of legal rights and duties that AIs would be subjected to should be decided
carefully.
If a programmer/user programs or uses an AI to take a specific action and it acts accordingly,
leading to a civil wrong, the programmer or user should be held directly responsible.
Thus, the programmers/users of reactive machines should be held liable for such machines’
actions.
In cases where there is a deficiency in programming, the AI isn’t programmed to act in
the manner that gives rise to liability, but defective coding gives it the ability to make the
autonomous decisions that lead to it. In such situations, both the designer and
the AI should be held liable.
In cases where there is no deficiency in programming, legislators need to draft a liability
framework wherein the designers are able to exempt themselves from liability when they
take the maximum care possible, and at the same time, the victim of the AI’s unpredictable
evolving behavior can be compensated for the legal injury caused to her. This can be done
through an insurance scheme. The European Parliament’s Resolution on Civil Law Rules of
Robotics with recommendations to the Commission on Civil Law Rules on Robotics discusses
insurance and a compensation fund.
The solutions suggested for autonomous AIs should be applicable to machines with limited
memory and those with a theory of mind, but not to self-aware AIs. For self-aware AIs, the
rules of liability should be those which are applicable to humans, mutatis mutandis.
The rules for criminal liability of AIs should be based on the three models given by Gabriel
Hallevy which are the Perpetration-via-Another Liability Model, the Natural-Probable-
Consequence Liability Model, and the Direct Liability Model. The punishments for humans
should be modified accordingly for AIs.
While framing the law relating to the liability of AIs, it would be pertinent for legislators to
take a reasonable and balanced view with regard to the protection of the rights of citizens
and individuals and the need to encourage technological growth.86 If such a balance isn’t
achieved, it may adversely affect either the protection of rights or innovation and creativity.
Moreover, the law must also be clear on the rights and duties of programmers, so as to
crystallize the broad ethical standards that they must conform to.87
In the film ‘I, Robot’ (2004), a robot is suspected of killing its own creator. If such a thing
were to happen in reality, the creator herself would be held liable for her own death.
Wouldn’t that be absurd? But then, so is not granting legal status to artificially intelligent
entities.
86 Supra note 5, at 28.
87 Id. at 28.