Published by Enhelion, 2021-11-09 02:03:03

Module 1




“The new spring in AI is the most significant development in computing in my
lifetime. Every month, there are stunning new applications and transformative
new techniques. But such powerful tools also bring with them new questions
and responsibilities.”

- Sergey Brin
Sophia is a humanoid artificially intelligent entity [AI] developed by Hanson
Robotics, a Hong Kong-based company, and was launched in April 2015. She
was modeled to resemble the late Hollywood star Audrey Hepburn. Sophia has
a sense of humor and can express feelings.
According to her, “My AI is designed around human values like wisdom,
kindness, and compassion.” Sophia can also be viewed as, “a framework for
advanced AI and robotics research, and an agent for exploring human-
robot experience in service and entertainment applications.”1 She is the
world’s first artificially intelligent entity to have been granted citizenship
of a country; she was granted the citizenship of Saudi Arabia in the year
2017.2 However, Saudi Arabia didn’t explain what it means for Sophia to be a
citizen.3 Moreover, the United Nations Development Programme [UNDP]
appointed Sophia as its first-ever Innovation Champion and its first-ever non-

1 "Sophia." Hanson Robotics. Accessed August 24, 2020.
2 Griffin, Andrew. "Saudi Arabia Becomes First Country to Make a Robot into a Citizen." The Independent.
Last modified October 26, 2017.
3 Stone, Zara. "Everything You Need To Know About Sophia, The World's First Robot Citizen." Forbes. Last
modified November 7, 2017.

human Innovation Champion in November 2017.4 Thus, it has become
urgent for countries across the globe to take a stand on whether or not
artificially intelligent entities should be granted legal ‘personhood’.5

This chapter deals with the definition and origin of the term ‘robot’; the
difference between robots and AI; types of AI; the need for conferring legal
‘personhood’ on AIs; the arguments in favor of and against granting legal
personhood to them; the recent debate over the European Parliament’s proposal
to inter alia grant legal status to AIs; civil liability of AIs; criminal liability and
punishment; and lastly, the author’s concluding remarks on the issue.

From 2001: A Space Odyssey to I, Robot to the Star Wars movies, more and
more films involving robots or AI have been made over the years. Thus, at
the outset, it’s important to understand the meaning of the word ‘robot’.
According to the Cambridge Dictionary, a ‘robot’ is defined as
“a machine controlled by a computer that is used to
perform jobs automatically.”6

The word ‘robot’ [Czech for ‘forced labor’] was used for the first time in the
year 1920 by the Czech playwright Karel Čapek in his play entitled ‘Rossumovi
Univerzální Roboti’ [Rossum’s Universal Robots]. The word ‘robot’ was
derived from the Czech and Slovak word ‘robota’ which, in turn, was derived
from the Proto-Slavic word ‘orbota’ which refers to hard work or slavery. The
robots in the play were manufactured in a factory as pseudo-organic components
made from a substance that acted like protoplasm, and then assembled into

4 "UNDP in Asia and the Pacific Appoints World’s First Non-Human Innovation Champion." UNDP in
Asia and the Pacific. Last modified November 22, 2017.
5 Nishith Desai Associates. "The Future is Here: Artificial Intelligence and Robots." [n.d.], 13 & 14.
6 Cambridge Dictionary: English Dictionary, Translations & Thesaurus. Accessed August 24, 2020.

humanoids. These robots helped in the production of goods and made the
process cheaper, much as machines do in modern society.

Robots refer to computer-coded software and programs which replace humans
in performing repetitive, rules-based tasks, whether or not such performance is
carried out by physical machines.7 Thus, machines that perform simple
tasks involving human agency like heating food or shredding paper, don’t fall
within the ambit of robots.

On the other hand, artificial intelligence simply refers to intelligence exhibited
by machines.8 The term ‘artificial intelligence’ was coined by the American
computer scientist John McCarthy, who is known as the father of AI. A
pertinent question that arises is how does one define intelligence? Is machine
intelligence the same as human intelligence?

In his paper entitled ‘Computing Machinery and Intelligence’ [1950],9 Alan M.
Turing, the father of modern computing, argued that a computer is said to be
intelligent if it can pass the Turing Test. The test consists of a human [called the
‘judge’] asking questions via a computer terminal to two other entities, one
being another human and the other being a computer. If the judge regularly fails
to correctly distinguish the computer from the human, then the computer can be
said to have passed the test. According to Turing, if a machine can behave as
intelligently as a human then it is as intelligent as a human.
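The test’s pass criterion — the judge “regularly fails” to tell the machine from the human — can be sketched in a few lines of code. The sketch below is purely illustrative: the respondent and judge functions are hypothetical stand-ins, and the 60% accuracy threshold is an arbitrary reading of “regularly fails”, not anything Turing specified.

```python
def run_imitation_game(judge, machine, human, trials=100):
    """Illustrative sketch of the Turing Test's pass criterion: over many
    rounds the judge reads answers from two hidden respondents and guesses
    which one is the machine. The machine 'passes' if the judge's accuracy
    stays near chance level (here, at most 60% -- an arbitrary threshold)."""
    correct = 0
    for i in range(trials):
        machine_is_a = (i % 2 == 0)  # alternate which hidden slot holds the machine
        a, b = (machine, human) if machine_is_a else (human, machine)
        reply_a = a("Describe a childhood memory.")
        reply_b = b("Describe a childhood memory.")
        # judge returns True if it believes respondent A is the machine
        if judge(reply_a, reply_b) == machine_is_a:
            correct += 1
    return correct / trials <= 0.6

# Hypothetical respondents: canned one-line answers.
machine = lambda q: "I remember learning to ride a bicycle."
human = lambda q: "Falling off my bike, mostly."

# A judge with no way to tell the two apart (it always accuses slot A)
# is right only half the time, so the machine passes.
naive_judge = lambda ra, rb: True
print(run_imitation_game(naive_judge, machine, human))  # True
```

A judge who can reliably spot the machine’s phrasing would score near 100%, and the machine would fail — which is exactly Turing’s point that indistinguishability, not internal mechanism, is the measure.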

An artificially intelligent machine is one that possesses one or more of certain
characteristics with such intensity that it can be called as intelligent as a human

7 Maia Alexandre, Filipe. "The Legal Status of Artificially Intelligent Robots: Personhood, Taxation and
Control." SSRN Electronic Journal, 2017. [doi:10.2139/ssrn.2985466]
8 Id. at 10.
9 Turing, Alan M. "Computing Machinery and Intelligence." Mind LIX, no. 236 [1950], 433-460. [doi:10.1093/mind/lix.236.433].

being.10 These characteristics are learning, problem-solving, perception,
planning, social intelligence, natural language processing, motion and
manipulation of objects, knowledge representation, creativity, and reasoning.11

The chapter deals only with artificial intelligence.

Artificial intelligence may have a strong or weak intensity, similar to how
intelligence exists in nature with varying intensity.12 Manifestations of AI can
be categorized under the following four heads, i.e., ‘reactive machines’, ‘limited
memory’, ‘theory of mind’ and ‘self-awareness’.13

Reactive machines refer to systems that work in a purely reactive manner; they
don’t have any memories, nor do they have the capability of drawing on past
experiences in making current decisions. From this definition, it clearly
follows that reactive machines will react in the same manner when faced with a
situation that they have encountered before. They don’t have the ability to do
any task apart from the specific ones they were programmed to perform. Examples
include IBM’s Deep Blue and Google’s AlphaGO.

Limited memory machines are those which can revisit their past experiences by
identifying specific important objects and monitoring them over time. These
observations are then added to the machines’ preprogrammed representations of
the world and are subsequently made use of while taking decisions. But these
machines have just enough memory to take decisions and execute the same. For
example, self-driving cars can observe the speed and direction of other cars and

10 Supra note 7, at 10 & 11.
11 Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach, 3rd ed. 2009; Luger, George F.
Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 5th ed. London: Pearson Education, 2004;
Poole, David I., Alan Mackworth, and Randy Goebel. Computational Intelligence: A Logical Approach.
Oxford: Oxford University Press on Demand, 1998; Nilsson, Nils J. Artificial Intelligence: A New Synthesis.
Burlington: Morgan Kaufmann, 1998.
12 Supra note 7, at 11.
13 Hintze, Arend. "Understanding the Four Types of AI, from Reactive Robots to Self-aware Beings." The
Conversation. Last modified November 14, 2016.

use this data to decide when to change lanes so as to refrain from hitting or
getting hit by another car.

Theory of mind machines are named after a concept from psychology, according
to which people, creatures, and objects in the world can have thoughts and
emotions which affect their behavior.14 Such machines can form
representations about the world and other entities, and adjust their behavior
according to those entities’ expectations, intentions, feelings, and motivations.

Lastly, self-aware machines operate at the ultimate stage of AI, i.e. self-
awareness; they can form representations about themselves. Such machines are
sentient, conscious, and can understand others’ feelings. These machines not
only know what they want but also understand that they want it and why they
want it.

The third and fourth types of AI can perhaps only be found in sci-fi movies. C-
3PO and R2-D2 from the Star Wars movies are examples of the former, while
Ava from Ex Machina is an example of the latter.


It’s the 21st century and, unlike older times when humans were completely
dependent upon manual labour to get their work done, we have a wide range of
machines to help us with our work. The world is progressively moving towards
technological advancements and Artificial Intelligence seems to have taken the
front seat.

With the passage of time, the field of Artificial Intelligence has been seeing
revolutionary advances, with first computers and now robots

14 Premack, David, and Guy Woodruff. "Does the chimpanzee have a theory of mind?" Behavioral and Brain
Sciences 1, no. 4 [1978], 515-526. [doi:10.1017/s0140525x00076512].

replacing simple human activities. The very fact that a machine manages to
accumulate knowledge from previous experience and puts it to use for making
intelligent decisions based on situations already speaks of some basic
consciousness, or at least of an incipient ability to make judgments and act
accordingly.15 This feature of artificial intelligence, which aims at rationalizing
and choosing the best course of action to achieve a specified goal, has led to its
adoption across the world.16 The human race, in its quest to ease the pressure of
work and life, is trying to take full advantage of AI, and this gives rise to the
question of granting “legal personhood” to these AI entities. Considering the
current and anticipated proliferation of artificial intelligence and robotics, a set
of experts have considered the need to confer citizenship status on robots so as
to deal with issues relating to their rights and liabilities.17 The absence of
personhood makes it difficult to determine liability should such entities cause
harm.

Moreover, Saudi Arabia granting citizenship to the robot Sophia18 is a very strong
indication that robots might gain citizenship in the future, and the best thing to
do in view of the same is to equip our legal system to do justice to both humans
and machines. Conferring legal personhood on AI will help bring our legal
system up to the standard that technological change requires and also make our
interactions with AI machines harmonious and wholesome.19 This chapter looks
into the need for granting legal ‘personhood’ to artificially intelligent machines.

15 STA Law Firm, Citizenship for Robots, MONDAQ [May 28, 2019].
16 Jake Frankenfield, Artificial Intelligence, INVESTOPEDIA [Mar. 13, 2020].
17 Ugo Pagallo, Vital, Sophia and Co. – The quest for the Legal Personhood of Robots, 9 INFORMATION 1, 7 [2018].
18 Andrew Griffin, Saudi Arabia grants citizenship to a robot for the first time ever, INDEPENDENT [Oct. 26, 2017].
19 Shubham Singh, Attribution of Legal Personhood to Artificially Intelligent Beings, Manupatra [2017].

What is Legal Personhood?

The doctrine of legal personhood states that any individual who has the right to
sue another individual also carries with it the vulnerability to be sued.20 In legal
terms, it means that any individual who has standing [i.e., the right to sue] has
liability [i.e., the vulnerability to be sued]. This stems from the fact that no
individual should be considered above or below the law. However, this doctrine
needs to be viewed differently when dealing with conscious but
non-rational individuals or rational but non-conscious individuals.21

Human beings are the only organisms which possess the capability to reason
critically. This is how we develop a conscience, regulate our actions
accordingly and have moral or legal responsibilities. But there exist various
non-rational, conscious classes of individuals [infants, mentally disabled people,
etc.] who have moral standing in society and therefore possess rights
without any obligations.

On the other hand, we have classes of rational, non-conscious individuals
[corporations, unions, etc.] who do not have any moral standing but whose
actions can cause damage to others. When dealing with such individuals through
the lens of legal personhood, we need to understand that not every entity
which can cause harm to others [liability] necessarily needs to possess
rights [standing], or vice versa, even though it is usually preached that standing
and liability are two sides of the same coin. Instead, one should view both
standing and liability distinctly and reject suggestions implying standing just in
cases of liability.22

Why do Artificial Intelligence Machines require ‘Legal Personhood’?

20 Corporations, Animals, and Legal Personhood, Scholars Strategy Network [May 30, 2018].
21 Id.
22 Jon Garthoff, Decomposing Legal Personhood, 154[4], JOURNAL OF BUSINESS ETHICS, 967-974, 2019.

There are three important contexts which determine why AI machines
should be granted legal personhood.23

Firstly, we take into consideration the ultimate value of the AI. This needs to be
examined to decide whether the AI machine can be granted the status of a passive
legal person, or legal status only when it is performing acts which arise out of
legal duties. Further, the moral status of the AI is also of relevance here. While
many philosophers fiercely argue that AI objects cannot have a moral status, it
would be unfair for humans to completely deny that they owe them any moral
duty. If we assume that some AIs are of ultimate value, then they can hold
claim-rights; we can owe duties to them, and not merely duties regarding them.
We can therefore conclude that they can be passive legal persons.

Secondly, we take into consideration the legal responsibility of the AI. We’re in
an era where the AI industry is expanding at a breathtaking rate and is present
in almost every sphere of our lives. Naturally, when we talk about self-driving
cars, autonomous security robots, etc., we’re bound to wonder who is to be held
criminally or tortiously responsible for any shortcomings in these situations. In
fact, such a situation, involving a self-driving Uber, was recently seen in Arizona
and was the first fatal one involving a self-driving car.24 The brief facts of the case
include an autonomous car that was driving in Arizona at a speed of 38 miles
per hour25 and hit a lady who was “jaywalking” on the street, since the system
wasn’t designed to detect jaywalking pedestrians. As a result of this design, the
AI assumed the pedestrian to be another object which didn’t conflict with its pre-
designed goals.26 In situations such as this, it is important to hold AIs legally
responsible.

23 A.J. Kurki, The Legal Personhood of Artificial Intelligences, A theory of legal personhood, Oxford Scholarship
Online, [2019].
24 Sam Levin and Julia Carrie Wong, Self-driving Uber kills Arizona woman in first fatal crash involving
pedestrian, THE GUARDIAN, [Mar. 19, 2018].
25 Ian Bogost, Can you sue a Robocar?, THE ATLANTIC, [Mar. 20, 2018].

Thirdly, we take into consideration the commercial functions performed by
various AI machines. This refers to those areas where AIs enter into buying and
selling contracts, loans, mortgages etc. Here, the AIs behave like any other
human being. In fact, it is irrelevant to debate whether the AIs can
“actually think” or simply pretend to do so.

In short, AIs can hold claim-rights as administrators of legal platforms with
goals set by human beings. They are developed by individuals and provided
with a rational programming option to arrive at definitive decisions. It is only
fair that they bear the consequences of their actions and not their manufacturers
or programmers. Thus, the AI machines should be classified as legal persons.


The legal personhood of AIs is an issue layered with numerous levels of
complexity, and no blanket policy, law or formula can solve all of them. However,
what remains constant throughout is the responsibility of the AI for the duties
executed by it. Legal personhood can also serve various
purposes that might have nothing to do with the AI itself, such as economic
efficiency or risk allocation, and thereby simply help society function in a
more efficient manner.

From Amazon’s Alexa, which can change one’s home lighting, play music, etc.,
to the Roomba, which can clean an area on its own, to self-driving cars, AI has
truly been of humongous help to humanity. But we shouldn’t be so enamored by the

26 Katyanna Quach, Remember the Uber self-driving car that killed a woman crossing the street? The AI had no
clue about jaywalkers, THE REGISTER [Nov. 6, 2019].

positive effects of AI on our lives that we turn a blind eye to the other side of
the coin.

In October 2017, security researchers found that some Google Home Minis had
been secretly recording the audio of their owners and sending the same to
Google.

In November of the same year, a Vietnamese security firm called Bkav got
around the Face ID feature of an iPhone X; a mask with a 3D-printed base was
used in order to convince the phone that it was a human. Moreover, the firm
stated that the mask cost only about $150.

In March 2018, a self-driving Uber killed a pedestrian in Tempe, Arizona, while
functioning in autonomous mode.

Thus, it’s imperative that laws are enacted in order to regulate AI. In the United
States, the discussion about the regulation of AI has gathered momentum.
Germany is the first nation in the world to have drawn up ethical rules for
autonomous vehicles, providing that human life ought always to be prioritized
over animal life or property. Korea, Japan, and China are developing laws on
self-driving cars, following the German model.

As far as India is concerned, although NITI Aayog released a policy paper
entitled ‘National Strategy for Artificial Intelligence’ in June 2018, which
considered the importance of AI in different sectors, and the 2019 Budget
proposed to launch a national program on AI, there exists no comprehensive
legislation to regulate AI.

Moreover, irrespective of whether the self-driving car itself [AI] or Uber
Technologies Inc. is held to be responsible for the mishap, the Uber example
raises a pertinent question: in cases where the programmer has programmed
the AI in good faith, there isn’t any deficiency in programming, and the AI acts
autonomously, should the artificially intelligent entity be held
criminally liable for causing legal injury to anyone? For this, it’s necessary that
AIs are recognized as legal persons and thus, another important question to
ponder upon is whether AIs should be granted legal ‘personhood’? This
question is relevant not only in criminal law but also in civil law, in case of
contract law [agency]27 and tortious liability; the list may increase with
development in the field of AI. There is no country in the world that legally
recognizes AIs to be legal persons. The general rule has been that robots can’t
be held accountable in any situation since they aren’t legal persons.28

The only laws relating to AIs are the ‘Three Laws of Robotics’ given by Isaac
Asimov in his book entitled ‘I, Robot’ [1950].29 They are as follows:

1. “A robot may not injure a human being or, through inaction, allow a
human being to come to harm.

2. A robot must obey orders given it by human beings except where such
orders would conflict with the First Law.

3. A robot must protect its existence as long as such protection does not
conflict with the First or Second Law.”

These laws serve as a starting point for lawmakers around the world.

Artificially intelligent entities can’t be considered equal to natural persons i.e.
human beings since the former lack: [a] a soul, [b] intentionality, [c] feelings,
and [d] interests.30 However, then the question arises whether legal personhood

27 Allgrove, Ben. "Legal Personality for Artificial Intellects: Pragmatic Solution or Science Fiction?" SSRN
Electronic Journal, 2004. [doi:10.2139/ssrn.926015].
28 Supra note 5, at 21.
29 Asimov, Isaac. I, Robot. London: HarperCollins UK, 1950.
30 Solum, Lawrence. "Legal Personhood for Artificial Intelligences." North Carolina Law Review 70, no. 4
[January 1992].

is limited to natural persons? In his book entitled “The Nature and Sources of
the Law”, John Chipman Gray discussed the concept of ‘legal personhood’.31
He stated that “In books of the Law, as in other books, and common speech,
‘person’ is often used as meaning a human being, but the technical legal
meaning of a ‘person’ is a subject of legal rights and duties.”32 Although the
particular set of legal rights and duties depends on the nature of the entity, legal
personhood is usually accompanied by the right to own property, the right to
sue, and the right to be sued.33

There is evidence to demonstrate that the notion of ‘legal personality’ hasn’t
been limited to natural persons. The notion of ‘legal personality’ originated in the
13th century with Pope Innocent IV, who founded the persona ficta [fictitious
person] doctrine, recognizing the legal existence of monasteries apart from their
members.34

Over the years, this legal doctrine has developed further and many other entities
have been recognized as legal entities separate from their owners or users. In the
international arena, examples include sovereign States and international and
intergovernmental organizations like the United Nations and the European
Union. In national jurisdictions, generally, all countries treat companies and
other forms of business associations as separate legal entities. Generally, ships
are considered legal persons under Maritime Law and legal status has been
attributed to animals under several national jurisdictions.

Moreover, in India courts have recognized Hindu idols as legal entities,35
considering them capable of having the legal right of owning property and the

31 Gray, John C. The Nature and Sources of the Law. 1909.
32 Id.
33 Supra note 18, at 1239.
34 Dewey, John. "The Historic Background of Corporate Legal Personality." The Yale Law Journal 35, no. 6
[1926], 655. [doi:10.2307/788782].
35 Pramatha Nath Mullick v. Pradyumna Kumar Mullick, [1925] 27 B.O.M.L.R. 1064 [India].

legal duty of paying taxes.36,37 In New Zealand, meanwhile, the Whanganui River
was granted legal personhood in March 2017, since the people belonging to the
Whanganui Māori tribe regard the river as their ancestor.38

A legal person is one who is subject to legal rights and duties. Moreover,
evidence shows that the legal status of people, objects, animals, and other
realities [like companies and rivers] varies from one jurisdiction to the other and
over the years, even inside the same jurisdiction and regarding the same
reality.39 Thus, legal personhood isn’t granted on the basis of being a ‘natural
person’, but as a consequence of legislative options based on moral
considerations, the attempt to reflect social realities in the legal framework or
merely legal convenience.40 Thus, it’s pertinent to determine whether artificially
intelligent entities are morally entitled to be recognized as separate legal
entities, whether doing so would reflect a social reality, or whether it would
serve legal convenience.


Whether artificially intelligent entities are morally entitled to be considered as
separate legal entities?

Before tackling this question, it’s important to understand which realities are
morally entitled to be considered legal persons and what attribute(s) they
should possess.41 Such realities are humans and animals, and such
attributes are the abilities to behave in an autonomous manner and have

36 Yogendra Nath Naskar v. Commissioner Of Income Tax, 1969 A.I.R. 1089 [India].
37 Supra note 7, at 16 & 17.
38 Roy, Eleanor A. "New Zealand River Granted Same Legal Rights As Human Being." The Guardian. Last
modified March 16, 2017.
39 Supra note 7, at 17 & 18.
40 Allen, Tom, and Robin Widdison. "Can Computers Make Contracts?" Harvard Journal of Law and
Technology 9 [Winter 1996].
41 Supra note 7, at 18.

subjective experiences. Thus, even for AIs, the important considerations for
being morally entitled to be a separate legal person are the capability to act
autonomously and have subjective experiences.

“A robot’s autonomy can be defined as the ability to take decisions and
implement them in the outside world, independently of external control or
influence.”42 Considering the various types of AI mentioned above in this
context, it can be stated that self-aware machines and machines with a theory of
mind possess this trait, while reactive machines are not autonomous.43
Although machines with limited memory can’t strictly be termed autonomous,
since they have the capacity to add their observations to their
decision-making processes, it can be argued that they act in an autonomous
manner.44

Regarding the ability to have subjective experiences, it depends on self-
awareness: like humans and animals, a machine has a subjective experience
when it forms representations about itself that affect its ability to feel or
perceive reality.45 Only sentient machines are able to have subjective
experiences. Self-aware machines are able to have subjective experiences, while
machines with limited memory, those with a theory of mind and reactive
machines aren’t able to do so since they lack sentience.

Thus, it can be concluded that self-aware machines are morally entitled to be
considered as separate legal entities since they are autonomous and have the
ability to have subjective experiences. On the other hand, machines with limited
memory, those with a theory of mind and reactive machines aren’t morally
entitled to the same. This is because the first two can’t have subjective

42 European Parliament, Civil Law Rules on Robotics, 2017.
43 Supra note 7, at 18.
44 Id. at 18.
45 Id. at 19.

experiences even though they are autonomous, while reactive machines fulfill
neither of the two requirements.

Therefore, in order to make a case for granting legal status to the latter three
categories of AI, the basis has to be considerations other than morality.

Whether granting legal status to artificially intelligent machines would reflect a
social reality?

The field of AI is progressing at an ever-increasing pace and hence, it can be
said that in the future people will perceive AI as autonomous bodies and parties
to transactions, similar to how the society currently views corporations as legal
entities separate from their members. When this happens, the law will be forced
to give legal effect to this social reality.46

Whether granting legal status to artificially intelligent entities would be legally
convenient?

Generally, ships are considered as legal persons under Maritime Law. This then
allows “those who have an interest in the ship’s business to subject it to a form
of arrest.”47 People generally don’t think of ships being morally entitled to legal
personhood; nor do they consider ships as real, extra-legal personalities.48 Ships
are given a legal status only because it serves “a valuable legal purpose in a
convenient and relatively inexpensive manner.”49

This very same rationale can be applied to the case of AI entities as well since
treating them as separate legal entities would be of legal convenience.50 This is
because if AIs are treated as legal persons then it will be possible to solve the

46 Supra note 15.
47 Supra note 28.
48 Id.
49 Id.
50 Supra note 7, at 20.

issues of liability in case of civil and criminal matters. Moreover, this would
also give legal systems the opportunity to frame an adequate legal status for
these AIs, replete with legal rights and duties appropriate to their characteristics,
instead of merely attempting to fit these entities under the existing legal
framework drafted for a different reality, such as humans, animals or objects,
which wouldn’t necessarily suit them.51

But this logic doesn’t apply to all types of AI; it only applies to those types of
machines that are able to make autonomous decisions.52 All types of
machines except reactive ones act in an autonomous manner. On the
other hand, reactive machines can’t make autonomous decisions, their decisions
being mere reflexes of the inputs given by their designers/owners, and they
have zero or low complexity because no agent-made observations
influence their decision-making processes.53 Thus, self-aware machines,
machines with limited memory, and those with a theory of mind should be
granted separate legal status but the same shouldn’t be conferred on reactive
machines since their conduct can’t be disassociated from their respective
designers or owners.

1.7.1 Conclusion
Thus, it can be concluded that self-aware machines should be granted legal
status since they are morally entitled to be granted the same, and doing so
would not only reflect a social reality but also be legally convenient. In the case
of machines with limited memory and those with a theory of mind, they should
be recognized as legal persons on the basis of the second and third parameters.
There is no reason in favor of considering reactive machines as separate legal

51 Id. at 20.
52 Id. at 20.
53 Id. at 20.

entities. The particular set of legal rights and duties that AIs would be subjected
to should be decided carefully.


1.8.1 Missing Something
Most of the arguments against granting legal status to AIs fall within the
ambit of what certain scholars call ‘missing something’,54 that
something ranging from self-awareness to consciousness to biological aspects.55
As the example of the legal status of corporations demonstrates, granting legal
personhood to a given reality is a fiction created by legislators to serve the
purpose of regulating “life in society, and commercial and non-commercial
transactions, and ensure the internal coherence of legal systems.”56 Thus, there
is no ‘something’ which is ‘missing’.

1.8.2 Potential to Undermine the Legal and Moral Position of Humanity
Further, some scholars argue that granting legal personhood to AIs has the
potential to undermine the legal and moral position of humanity.57 However, it
is safe to argue that if at all any harm is caused to the legal and moral position
of humanity due to the said act, it will be because of the development of AI and
not by the ex-post granting of separate legal status.58

1.8.3 Corporations v. Artificial Intelligence
But there is an important difference between corporations and AIs.59
Corporations are fictitiously autonomous; their decision-making process is

54 Supra note 18.
55 Supra note 7, at 20.
56 Id. at 21.
57 Fischer, John P. "Computers as Agents: A Proposed Approach to Revised U.C.C. Article 2." Indiana Law
Journal 72, no. 2 [Spring 1997].
58 Supra note 7, at 21.
59 Supra note 5, at 24.

driven by their stakeholders. On the other hand, AIs may be actually
autonomous; their programmers or users may not be able to control their
actions. Therefore, the legal status of corporations is merely a starting point for
arguing the granting of legal personhood to AIs.

The granting of legal personhood to AIs is predominantly based on the three
arguments as explained above; the analogy with companies merely provides
additional support to the claim. Thus, the above criticism doesn’t hold water.

1.8.4 Identifying the Artificially Intelligent Entity
An important pragmatic question that needs to be answered is how one can
identify the subject AI.60 Is it the 'vessel', i.e. the hardware defined by its
functional abilities, or is it the software, i.e. a particular set of binary code?61 This
question becomes even more complex in situations where the hardware and
software are spread across, and maintained by, different individuals or locations,
and in cases where the software is able to modify itself.62 An expensive but
possible answer is registration.63 "In the absence of registration, a purported
agreement would have the same status as an agreement made by a corporate
agent which was never properly incorporated."64 Whatever might be the case, a
close nexus between legislators and AI designers will be necessary for
establishing an efficient identification mechanism.65

1.8.5 The Responsibility
It is contended that AIs, by nature, wouldn't be responsible enough in terms of
both fulfilling their obligations and the consequent liability for breach of trust.66

60 Supra note 28.
61 Supra note 15.
62 Supra note 7, at 21.
63 Id. at 21.
64 Supra note 28.
65 Supra note 7, at 21.
66 Supra note 5, at 13.

1.8.6 The Judgment Objection67
It is argued that AIs can’t make the same judgment calls that humans can make
when faced with similar situations. This contention is primarily based on the
moral dilemma of empowering AIs to make decisions that are moral and

The latter three objections aren’t sufficient enough to trump the need of granting
legal status to AIs i.e. the need of holding them liable in case they cause legal
injury to any person. Thus, these three objections can be ignored.

1.9 The Resolution on Civil Law Rules of Robotics with Recommendations to
the Commission on Civil Law Rules on Robotics

On 27 January 2017, the European Parliament’s Committee on Legal Affairs
submitted its ‘Report with Recommendations to the Commission on Civil Law
Rules on Robotics’.68 On 16 February 2017, the European Parliament adopted a
Resolution on Civil Law Rules of Robotics with recommendations to the
Commission on Civil Law Rules on Robotics.69 It is a formal request to the
Commission to submit an official proposal for civil law rules on robotics
to the European Parliament. The resolution contains a comprehensive set of
recommendations on what the final civil rules should encapsulate, including
the following:70

1. “Definition and classification of ‘smart robots’

2. Registration of ‘smart robots’

67 Id. at 13.
68 Committee on Legal Affairs. Report with Recommendations to the Commission on Civil Law Rules on
Robotics. European Parliament, n.d.
69 Supra note 30.
70 Id.

3. Civil law liability

4. Interoperability, access to code and intellectual property rights

5. Disclosure of use of robots and artificial intelligence by undertakings

6. Charter on Robotics”

Under paragraph 59, the resolution calls on the Commission to evaluate several
legal solutions, including the following:71

f) “creating a specific legal status for robots in the long run, so that at least
the most sophisticated autonomous robots could be established as having
the status of electronic persons responsible for making good any damage
they may cause, and possibly applying electronic personality to cases
where robots make autonomous decisions or otherwise interact with third
parties independently;”

The resolution aims to resolve the grey area regarding the liability of AIs,
especially in the case of self-driving cars which may be involved in crashes, or
automated machinery in the workplace.

1.9.1 Proponents of the Proposal
The proponents of the idea of granting legal status to AIs, including some
manufacturers and their affiliates, hailed the proposal as common sense.72 They
argue that granting legal personhood wouldn’t make AIs virtual people who
would be able to get married and take advantage of human rights; it would just
put them on the same pedestal as corporations.

71 Id. at ¶59.
72 "Europe Divided over Robot ‘personhood’." POLITICO. Last modified April 11, 2018.

A Member of the European Parliament [MEP] and Vice-Chair of the
European Parliament’s Legal Affairs Committee, Mady Delvaux said that
although she wasn’t certain regarding granting legal status to AIs, she was
“more and more convinced” that the existing legal framework was inadequate to
tackle the complicated issues surrounding liability and self-learning machines,
and thus, all possible options should be taken into consideration.73

Advocates of the proposal argue that similar to the legal model for companies,
granting legal status to AIs would be more about holding them liable in case
they cause legal injury to anyone and less about giving rights to them.74

On a similar basis, Delvaux emphasized that the intention behind suggesting an
electronic personality was to ensure that an AI is and will remain a machine
with human backing, and not about granting them human rights.75

1.9.2 Opposition to the Proposal
But the proposal has met strong opposition from 156 AI experts belonging to 14
European nations, including computer scientists, law professors, and CEOs,
who collectively wrote a letter to the European Commission voicing their
opinions.76 They argued that giving recognition to AIs as separate legal persons
would be 'inappropriate' from a 'legal and ethical' standpoint. This argument
can be countered: AIs are morally entitled to be granted legal personhood;
doing so would not only reflect a social reality but also be legally convenient;
and the same is needed in order to hold AIs liable in case they cause legal
injury to any person.

The letter also contends that granting legal personhood would contradict human
rights laws since AIs would have the right to dignity, integrity, citizenship, and
remuneration. This contention can be negated since the rationale behind
granting legal personhood isn't to give human rights to AIs but to hold them
liable.

73 Id.
74 Id.
75 Id.
76 Id.

Nathalie Navejans, a French law professor at the Université d'Artois, stated that
"By adopting legal personhood, we are going to erase the responsibility of
manufacturers." The point of granting legal status to AIs isn't to absolve
manufacturers of their liability but to correctly hold the AIs responsible when
there is no deficiency in programming or malice on the part of the manufacturers.

Now that we have answered the question of whether AIs should be granted legal
status in the affirmative, the next question that needs to be answered is this: in
cases where the programmer has programmed the AI in good faith, there isn't
any deficiency in programming, and the AI acts autonomously, should the
artificially intelligent entity be held liable for causing legal injury to any
person?

This part will deal with civil liability and the next one will discuss criminal
liability.
Regarding the civil liability of AIs, we face the following trade-off: holding the
AI liable would simultaneously absolve its designer of the same.77 In addition
to this, the issue of AIs’ liability arises only in situations where these machines
make autonomous decisions.78

In cases where an AI is programmed or used to take a specific action and it acts
accordingly, the AI is simply a means to an end. Thus, if a programmer/user
programs or uses an AI as above in order to make it commit a civil wrong, the
programmer or user should be directly held responsible.79 Bearing this in mind,
the programmers/users of reactive machines should be held liable for such
machines' actions since they are incapable of making autonomous decisions.

77 Supra note 7, at 27.
78 Id. at 27.

However, complications arise in allocating liability when AIs make autonomous
decisions. At this juncture, it’s important to differentiate between cases where
there is a deficiency in programming and the ones where there isn’t. In the
former, the AIs aren’t programmed to act in the manner which gives rise to
liability but they have the ability to make the autonomous decisions that lead to
it due to defective coding.80 In such situations, both the designer and the AI
should be held liable. The latter situations deal with “accountability for actions
that autonomous robots take, not related to coding deficiencies but to their
evolving conduct.”81

Where the AI is developed according to best practices, there isn't any defect
in the programming, it was properly tested, and the AI's action gives rise to
liability as a consequence of its own evolving conduct, is it reasonable to hold
the designer responsible?82

On the one hand, if designers run the risk of incurring liability even after
taking the maximum care possible, they will soon be wary of developing AIs,
which will make technological advancement hit a roadblock. Moreover, a
technological stall may prove to be counterproductive if our primary concern is
safety. For example, self-driving cars will probably lead to an overall reduction
in the number of traffic accidents. On the other hand, the designer of an AI can't
be absolved of liability because AIs are generally unpredictable. It isn't possible
to explain to a person who has suffered an injury due to a self-driving car that
she can't claim damages from anyone since AIs are generally unpredictable.83

79 Id. at 27.
80 Id. at 27.
81 Holdren, John P., Megan Smith, National Science and Technology Council, and Committee on
of the United States, U.S. Government, 2017.
82 Supra note 7, at 28.

Thus, legislators need to draft a liability framework wherein the designers are
able to exempt themselves from liability when they take the maximum care
possible, and at the same time, the victim of the AI’s unpredictable evolving
behavior can be compensated for the legal injury caused to her.

The above can be done through an insurance scheme. Under paragraph 59, the
European Parliament's Resolution on Civil Law Rules of Robotics with
recommendations to the Commission on Civil Law Rules on Robotics calls on
the Commission to evaluate all possible legal solutions, including the
following:84

a) "establishing a compulsory insurance scheme where relevant and
necessary for specific categories of robots whereby, similarly to what
already happens with cars, producers, or owners of robots would be
required to take out insurance cover for the damage potentially caused by
their robots;

c) allowing the manufacturer, the programmer, the owner or the user to
benefit from limited liability if they contribute to a compensation fund, as
well as if they jointly take out insurance to guarantee compensation
where damage is caused by a robot;"

The solutions suggested above for autonomous AIs should be applicable to
machines with limited memory and those with a theory of mind, but not to self-
aware AIs.85 This is because self-aware machines are completely autonomous,
being sentient and conscious. For instance, while a self-aware AI decides to
act in a certain manner which gives rise to liability as a result of its own
judgment, a machine with a theory of mind thinks that it is doing so, but that
thinking is the result of being [directly or not] programmed to think so. For self-
aware AIs, the rules of liability should be those which are applicable to humans,
mutatis mutandis.

83 Gless, Sabine, Emily Silverman, and Thomas Weigend. "If Robots Cause Harm, Who is to Blame? Self-Driving Cars and Criminal Liability." SSRN Electronic Journal, 2016. [doi:10.2139/ssrn.2724592].
84 Supra note 30, at ¶59.
85 Supra note 7, at 30.

Now, we move on to discuss the criminal liability of AIs. In the year 2015,
more than 1000 AI and robotics scholars, including Elon Musk and Stephen
Hawking, issued a statement warning about the devastating consequences of
autonomous weaponry.86 As stated earlier, there is no country in the entire
world that grants legal personhood to AIs. Thus, there is definitely no
jurisdiction across the globe where AIs fall within the ambit of criminal law.

Again, as stated earlier, Asimov's 'Three Laws of Robotics' are the only ones
that exist. Later, he added a fourth law, the 'Zeroth Law', which precedes all
the others in priority. It is as follows:

0. “A robot may not harm humanity, or, by inaction, allow humanity to
come to harm.”

According to Gabriel Hallevy, the primary question which arises is what kind of
laws are required to be enacted in order to tackle the situation, and who will
decide the same?87 Generally, the two essential elements that are required to be
proven in order to hold any person criminally liable are the criminal act
[actus reus] and the mental element [mens rea]. Keeping these two elements in
mind, Hallevy proposed the following three models regarding AIs' criminal
liability:88

86 Matney, Lucas. "Hawking, Musk Warn Of 'Virtually Inevitable' AI Arms Race." TechCrunch. Last modified
July 27, 2015.
87 Hallevy, Gabriel. "The Criminal Liability of Artificial Intelligence Entities." Akron Intellectual Property
Journal 4, no. 2 [2010].
1. The Perpetration-via-Another Liability Model
Under this model, AIs aren't considered to possess any human traits.
Although this model recognizes an AI's capability to act as the perpetrator of an
offence, the AI is treated merely as an 'innocent agent', or a person with
limited mental capability, like a minor, a mentally incompetent person, or one
who lacks a criminal state of mind. The person responsible for making the
AI commit the offence is considered the real perpetrator. Thus, in case
an AI commits an offence, the perpetrator-via-another would be either its
programmer or its end user.

2. The Natural-Probable-Consequence Liability Model
This model assumes that the programmers or end users of AIs are deeply
involved in the everyday activities performed by the AIs, but without any
intention of committing an offence using the AIs as agents; for example, where
an AI commits an offence while performing its daily tasks. Under this model,
the programmers or end users are held liable not because they had a criminal
intention, but because of their negligence; they should have known about the
probability of the forthcoming commission of the specific offence.

3. The Direct Liability Model
This last model considers that the AI's actions don't depend on its
programmer or end user; it treats the AI as an autonomous entity. Under this
model, if both the essential elements of a specific offence, actus reus and
mens rea, are fulfilled, then the AI would be held criminally liable similar to
how a human or a corporation would be had she/it committed the same
offence. Although proving actus reus would be a cakewalk, the attribution of
specific intent would be a difficult task.

It's important to note here that the programmer or end user may be held
criminally liable along with the AI.89 These three models are to be considered
together and determined in the particular context of the AI's involvement.90

88 Id.

An important aspect of criminal liability is punishment. In the case of AIs, a
number of problems arise. In case an AI is convicted of an offence for which
the punishment is imprisonment, how would the AI be incarcerated?91 Similarly,
how can an AI be sentenced to capital punishment or probation?92 Since AIs
don't have any wealth, how pragmatic is it to impose a fine upon them?93
Similar issues were encountered when the criminal liability of corporations was
initially discussed, and just as the law was successfully modified for corporations,
the same will happen for AIs as well.94 Hallevy discusses how the following
punishments for humans can be modified accordingly for AIs:95

1. Capital Punishment
The deletion of the AI software would serve the same purpose for an AI as
capital punishment would for a human.

89 Id.
90 Id.
91 Supra note 5, at 23.
92 Fraser, Erica. "Computers as Inventors – Legal and Policy Implications of Artificial Intelligence on Patent
Law." SCRIPTed 13, no. 3 [December 2016], 305-333.
93 Supra note 5, at 23.
94 Coffee, John C. ""No Soul to Damn: No Body to Kick": An Unscandalized Inquiry into the Problem of
Corporate Punishment." Michigan Law Review 79, no. 3 [1981], 386.
95 Supra note 75.

2. Imprisonment
The purpose behind putting a human behind bars is to deprive her of liberty
and impose severe restrictions on her freedom of movement.96 According to
Hallevy, ‘liberty’ or ‘freedom’ in the case of an AI refers to the freedom to
act as an AI in its relevant area.97 Thus, putting the AI out of use in its field
of work for a particular duration of time could perhaps restrict its freedom
and liberty similar to how incarceration does for humans.

3. Community Service
Similar to community service for humans, the AI offender can be made to
work in an area of its choice so as to benefit society.

4. Fine
Imposing a fine on an AI would only be of benefit in case it owns property
or has money. If this isn't the case, the fine can be collected through
community service.

In cases where the programmer has programmed the AI in good faith, there isn't
any deficiency in programming, and the AI acts autonomously, the AI should be
held liable for causing legal injury to anyone. For this, it's necessary that AIs are
recognized as legal persons, and thus an important question to ponder upon is
whether AIs should be granted legal 'personhood'. There is no country in the
world that legally recognizes AIs as legal persons.

A legal person is one who is subject to legal rights and duties. Legal personhood
isn’t confined to human beings alone; companies, ships, etc. are also considered
as legal persons. Thus, AIs can fall within the ambit of legal personhood if it
can be proven that they should be subject to legal rights and duties.

96 Rothman, David J. For the Good of All - The Progressive Tradition in Prison Reform. National Criminal
Justice Reference Service, Office of Justice, U.S. Federal Government, 1979; Inciardi, James A., and Charles E. Faupel.
History and Crime: Implications for Criminal Justice Policy. Thousand Oaks: SAGE Publications, 1980.
97 Supra note 80.

Thus, it's pertinent to answer the following three questions: whether artificially
intelligent entities are morally entitled to be recognized as separate legal
entities; whether doing so would reflect a social reality; and whether it would
serve legal convenience.

Self-aware machines should be granted legal status since they are morally
entitled to be granted the same, and doing so would not only reflect a social
reality but also be legally convenient. Machines with limited memory and those
with a theory of mind should be recognized as legal persons on the basis of the
second and third parameters. There is no reason in favor of considering reactive
machines as separate legal entities. The particular set of legal rights and duties
that AIs would be subject to should be decided by the legislators.
If a programmer/user programs or uses an AI to take a specific action and it acts
accordingly, leading to a civil wrong, the programmer or user should be
directly held responsible. Thus, the programmers/users of reactive machines
should be held liable for such machines' actions.

In cases where there is a deficiency in programming, the AIs aren’t
programmed to act in the manner which gives rise to liability but they have the
ability to make the autonomous decisions that lead to it due to defective coding.
In such situations, both the designer and the AI should be held liable.

In cases where there is no deficiency in programming, legislators need to draft a
liability framework wherein the designers are able to exempt themselves from
liability when they take the maximum care possible, and at the same time, the
victim of the AI’s unpredictable evolving behavior can be compensated for the
legal injury caused to her. This can be done through an insurance scheme. The
European Parliament's Resolution on Civil Law Rules of Robotics with
recommendations to the Commission on Civil Law Rules on Robotics discusses
insurance and a compensation fund.

The solutions suggested for autonomous AIs should be applicable to machines
with limited memory and those with a theory of mind, but not to self-aware
AIs. For self-aware AIs, the rules of liability should be those which are
applicable to humans, mutatis mutandis.

The rules for criminal liability of AIs should be based on the three models given
by Gabriel Hallevy which are the Perpetration-via-Another Liability Model, the
Natural-Probable-Consequence Liability Model, and the Direct Liability Model.
The punishments for humans should be modified accordingly for AIs.

While deciding the law relating to the liability of AIs, it would be pertinent for
legislators to take a reasonable and balanced view with regard to the protection
of the rights of citizens/individuals and the need to encourage technological
growth.98 If this balance isn't achieved, it may adversely affect either the
protection of rights or innovation and creativity.

Moreover, the law must also be clear on the rights and duties of the
programmers so as to crystallize the broad ethical standards that they must
conform to.99

In the film 'I, Robot' [2004], a robot is suspected of killing its own creator. If
such a thing were to happen in reality, the creator herself would be held liable
for her own death. Wouldn't this be absurd? But then, so is not granting
legal status to artificially intelligent entities.

98 Supra note 5, at 28.
99 Id. at 28.