Module 1

Legal Status of Robots

“The new spring in AI is the most significant development in computing in my lifetime. Every
month, there are stunning new applications and transformative new techniques. But such
powerful tools also bring with them new questions and responsibilities.”

- Sergey Brin

INTRODUCTION
Sophia is a humanoid artificially intelligent entity (AI) developed by Hanson Robotics, a
Hong Kong-based company, and was launched in April 2015. She was modelled to resemble
the late Hollywood star Audrey Hepburn. Sophia has a sense of humour and can express
feelings. According to her, “My AI is designed around human values like wisdom, kindness,
and compassion.” Sophia can also be viewed as “a framework for advanced AI and robotics
research, and an agent for exploring human-robot experience in service and entertainment
applications.”1 She is the world’s first artificially intelligent entity to have been granted
citizenship of a country: Saudi Arabia granted her citizenship in 2017.2 However, Saudi
Arabia didn’t explain what citizenship means for Sophia.3 Moreover, the United Nations
Development Programme (UNDP) appointed Sophia as its first-ever Innovation Champion in
November 2017, making her the first non-human to hold the title.4 Thus, it has become
urgent for countries across the globe to take a stand on whether or not artificially intelligent
entities should be granted legal ‘personhood’.5

This chapter deals with the definition and origin of the term ‘robot’; the difference between
robots and AI; types of AI; the need for conferring legal ‘personhood’ on AIs; the arguments
in favour of and against granting legal personhood to them; the recent debate over the


1 HANSON ROBOTICS, https://www.hansonrobotics.com/sophia/.
2 Andrew Griffin, Saudi Arabia Grants Citizenship to a Robot for the First Time Ever, INDEPENDENT, October
26, 2017, https://www.independent.co.uk/life-style/gadgets-and-tech/news/saudi-arabia-
robot-sophia-citizenship-android-riyadh-citizen-passport-future-a8021601.html.
3 Zara Stone, Everything You Need To Know About Sophia, The World’s First Robot Citizen, FORBES,
November 07, 2017, https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-
sophia-the-worlds-first-robot-citizen/#7d38b73d46fa.
4 UNDP in Asia and the Pacific Appoints World’s First Non-Human Innovation Champion, U.N.D.P. IN
ASIA AND THE PACIFIC, November 22, 2017, http://www.asia-
pacific.undp.org/content/rbap/en/home/presscenter/pressreleases/2017/11/22/rbfsingapore.html.
5 NISHITH DESAI ASSOCIATES, THE FUTURE IS HERE: ARTIFICIAL INTELLIGENCE AND ROBOTS 13 & 14 (2018).


European Parliament’s proposal to inter alia grant legal status to AIs; civil liability of AIs;
criminal liability and punishment; and lastly, the author’s concluding remarks on the issue.

DEFINITION AND ORIGIN OF ‘ROBOT’
From 2001: A Space Odyssey to I, Robot to the Star Wars movies, more and more films
involving robots or AI have been made over the years. Thus, at the outset, it’s important to
understand the meaning of the word ‘robot’. The Cambridge Dictionary defines a ‘robot’ as
“a machine controlled by a computer that is used to perform jobs automatically.”6

The word ‘robot’ (Czech for ‘forced labour’) was used for the first time in 1920 by the
Czech playwright Karel Čapek in his play ‘Rossumovi Univerzální Roboti’ (Rossum’s
Universal Robots). It was derived from the Czech and Slovak word ‘robota’, which in turn
came from the Proto-Slavic word ‘orbota’, referring to hard work or slavery. In the play, the
robots were manufactured in a factory from pseudo-organic components made of a substance
that acted like protoplasm, and were then assembled into humanoids. These robots made the
production of goods cheaper, much as automation does in modern society.

ROBOT V. ARTIFICIAL INTELLIGENCE
Robots refer to computer-coded software and programs which replace humans in performing
repetitive, rules-based tasks, whether or not the performance is carried out by physical
machines.7 Thus, machines which perform simple tasks involving human agency, like
heating food or shredding paper, don’t fall within the ambit of robots.

On the other hand, artificial intelligence simply refers to intelligence exhibited by machines.8
The term ‘artificial intelligence’ was coined by the American computer scientist John
McCarthy, who is known as the father of AI. A pertinent question which arises is: how does
one define intelligence? Is machine intelligence the same as human intelligence?

In his paper ‘Computing Machinery and Intelligence’ (1950),9 Alan M. Turing, the
father of modern computing, argued that a computer can be said to be intelligent if it passes
what came to be known as the Turing Test. The test consists of a human (called the ‘judge’)
asking questions via a computer


6 CAMBRIDGE DICTIONARY, https://dictionary.cambridge.org/.
7 Filipe Maia Alexandre, The Legal Status of Artificially Intelligent Robots 01, 68 (2017).
8 Id. at 10.
9 A.M. Turing, Computing Machinery and Intelligence 59 MIND 433, 460 (1950).

terminal to two other entities, one being another human and the other being a computer. If the
judge regularly fails to correctly distinguish the computer from the human, the computer
is said to have passed the test. According to Turing, if a machine can behave as
intelligently as a human, then it is as intelligent as a human.
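The imitation game can be pictured as a simple repeated experiment. The following Python sketch is purely illustrative and is not from Turing’s paper: judge, human and machine are hypothetical callables, and ‘passing’ is read as the judge’s identification accuracy staying near chance over many rounds.

```python
import random

def turing_test(judge, human, machine, questions, trials=100):
    """Play repeated rounds of the imitation game and report how often
    the judge correctly identifies the machine. Accuracy near 0.5 over
    many rounds means the machine is indistinguishable from the human,
    i.e. it has passed in Turing's sense."""
    correct = 0
    for _ in range(trials):
        # Hide the identities: the judge only sees two anonymous answer sets.
        respondents = [("human", human), ("machine", machine)]
        random.shuffle(respondents)
        transcript = [(q, respondents[0][1](q), respondents[1][1](q))
                      for q in questions]
        guess = judge(transcript)  # judge returns index 0 or 1
        if respondents[guess][0] == "machine":
            correct += 1
    return correct / trials
```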

An artificially intelligent machine is one which possesses one or more of certain
characteristics with such intensity that it can be called as intelligent as a human being.10
These characteristics are learning, problem solving, perception, planning, social intelligence,
natural language processing, motion and manipulation of objects, knowledge representation,
creativity and reasoning.11

The chapter deals only with artificial intelligence.

TYPES OF ARTIFICIAL INTELLIGENCE
Artificial intelligence may have a strong or weak intensity, similar to how intelligence exists
in nature with varying intensity.12 Manifestations of AI can be categorized under the
following four heads i.e. ‘reactive machines’, ‘limited memory’, ‘theory of mind’ and ‘self-
awareness’.13

Reactive machines refer to systems which work in a purely reactive manner; they don’t form
memories, nor can they draw on past experiences in making current decisions. From this
definition, it clearly follows that reactive machines will react in the same manner whenever
faced with a situation which they have encountered before. They don’t have the ability to do
any task apart from the specific ones they were programmed to perform. Examples include
IBM’s Deep Blue and Google’s AlphaGo.

Limited memory machines are those which can revisit their past experiences by identifying
specific important objects and monitoring them over time. These observations are then added
to the machines’ pre-programmed representations of the world and are subsequently made
use of while taking decisions. But these machines have just enough memory to take decisions


10 Supra note 7, at 10 & 11.
11 STUART J. RUSSELL & PETER NORVIG, ARTIFICIAL INTELLIGENCE: A MODERN APPROACH (3rd edn. 2009);
GEORGE F. LUGER, ARTIFICIAL INTELLIGENCE: STRUCTURES AND STRATEGIES FOR COMPLEX PROBLEM
SOLVING (5th edn. 2004); DAVID POOLE ET AL., COMPUTATIONAL INTELLIGENCE: A LOGICAL APPROACH
(1998); NILS J. NILSSON, ARTIFICIAL INTELLIGENCE: A NEW SYNTHESIS (1998).
12 Supra note 7, at 11.
13 Arend Hintze, Understanding the four types of AI, from reactive robots to self-aware beings, THE
CONVERSATION, November 14, 2016, https://theconversation.com/understanding-the-four-types-of-ai-from-
reactive-robots-to-self-aware-beings-67616.


and execute the same. For example, self-driving cars can observe the speed and direction of
other cars and use this data to decide when to change lanes so as to avoid hitting or being hit
by another car.
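The contrast between the first two categories can be made concrete in code. The sketch below is illustrative only; the policy functions are hypothetical stand-ins. The point is that a reactive agent’s output is a fixed function of the current input, while a limited-memory agent also consults a short, transient history, as the self-driving car example does.

```python
class ReactiveAgent:
    """No memory: the same observation always produces the same action."""
    def __init__(self, policy):
        self.policy = policy  # fixed function from observation to action

    def act(self, observation):
        return self.policy(observation)


class LimitedMemoryAgent:
    """Keeps a short window of recent observations (as a self-driving car
    tracks nearby vehicles) and decides on current input plus history."""
    def __init__(self, policy, window=5):
        self.policy = policy  # function of (observation, recent history)
        self.window = window
        self.history = []

    def act(self, observation):
        # Transient memory: only the last `window` observations are kept.
        self.history = (self.history + [observation])[-self.window:]
        return self.policy(observation, tuple(self.history))
```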

Theory of mind machines are named after a concept from psychology, according to which
people, creatures and objects in the world can have thoughts and emotions which affect their
own behaviour.14 Such machines can form representations about the world and other entities,
and adjust their own behaviour according to the expectations, intentions, feelings and
motivations of others.

Lastly, self-aware machines operate at the ultimate stage of AI, self-awareness; they can
make representations about their own selves. Such machines are sentient, conscious and can
understand others’ feelings. These machines not only know what they want but also
understand that they want it and why they want it.

The third and fourth types of AI can perhaps only be found in sci-fi movies. C-3PO and
R2-D2 from the Star Wars movies are examples of the former, while Ava from Ex Machina
is an example of the latter.

NEED FOR CONFERRING LEGAL ‘PERSONHOOD’ ON ARTIFICIALLY INTELLIGENT
MACHINES
From Amazon’s Alexa, which can change a home’s lighting and play music, to the Roomba,
which can clean an area on its own, to self-driving cars, AI has been of immense help
to humanity. But we shouldn’t be so enamoured by the positive effects of AI on our lives that
we turn a blind eye to the other side of the coin.

In October 2017, security researchers found that some Google Home Minis had been secretly
recording the audio of their owners and sending it to Google.

In November of the same year, a Vietnamese security firm called Bkav bypassed the Face
ID feature of an iPhone X; a mask with a 3D-printed base was used to convince the
phone that it was looking at a human. Moreover, the firm stated that the mask cost only about $150.

In March 2018, a self-driving Uber test vehicle, operating in autonomous mode, struck and
killed a pedestrian in Tempe, Arizona.


14 David Premack & Guy Woodruff, Does The Chimpanzee Have A Theory Of Mind? 1 BEHAV. & BRAIN SCI.
(1978).

Thus, it’s imperative that laws are enacted to regulate AI. In the United States, the
discussion about the regulation of AI has gathered momentum. Germany is the first nation in
the world to have drawn up ethical rules for autonomous vehicles, providing that human life
ought always to be prioritised over animal life or property. Korea, Japan and China are
developing laws on self-driving cars, following the German model.

As far as India is concerned, there exists no comprehensive legislation to regulate AI,
although NITI Aayog released a policy paper entitled ‘National Strategy for Artificial
Intelligence’ in June 2018, which considered the importance of AI in different sectors, and
the 2019 Budget proposed to launch a national programme on AI.

Moreover, irrespective of whether the self-driving car itself (the AI) or Uber Technologies Inc. is
held responsible for the mishap, the Uber example raises a pertinent question: in
cases where the programmer has programmed the AI in good faith, there is no deficiency
in the programming and the AI acts autonomously, should the artificially intelligent entity
be held criminally liable for causing legal injury to anyone? For this it’s necessary that
AIs are recognised as legal persons, and thus another important question to ponder is
whether AIs should be granted legal ‘personhood’. This question is relevant not only in
criminal law but also in civil law, in contract law (agency)15 and tortious liability; the
list may grow with developments in the field of AI. There is no country in the world which
legally recognises AIs as legal persons. The general rule has been that robots can’t be held
accountable in any situation since they aren’t legal persons.16

The only ‘laws’ specifically formulated for robots are the fictional ‘Three Laws of Robotics’
given by Isaac Asimov in his book ‘I, Robot’ (1950).17 They are as follows:

1. “A robot may not injure a human being or, through inaction, allow a human being to
come to harm.

2. A robot must obey orders given it by human beings except where such orders would
conflict with the First Law.


15 Benjamin D. Allgrove, Legal Personality For Artificial Intellects: Pragmatic Solution Or Science Fiction?
(2004).
16 Supra note 5, at 21.
17 ISAAC ASIMOV, I, ROBOT (1950).

3. A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law.”

These laws serve as a starting point for lawmakers around the world.

LEGAL PERSONHOOD
Artificially intelligent entities can’t be considered equal to natural persons, i.e. human beings,
since the former lack: (a) a soul, (b) intentionality, (c) feelings, and (d) interests.18 The
question then arises: is legal personhood limited to natural persons? In his book
“The Nature and Sources of the Law”, John Chipman Gray discussed the concept of
‘legal personhood’.19 He stated that “In books of the Law, as in other books, and in common
speech, ‘person’ is often used as meaning a human being, but the technical legal meaning of
a ‘person’ is a subject of legal rights and duties.”20 Although the particular set of legal rights
and duties depends on the nature of the entity, legal personhood is usually accompanied by
the right to own property, the right to sue and the right to be sued.21

There is evidence to demonstrate that the notion of ‘legal personality’ hasn’t been limited to
natural persons. The notion originated in the 13th century with Pope Innocent IV, who
founded the persona ficta (fictitious person) doctrine, recognising the legal existence of
monasteries apart from the monks within them.22

Over the years, this legal doctrine has developed further and many other entities have been
recognised as legal entities separate from their owners or users. In the international arena,
examples include sovereign States, and international and intergovernmental organizations
like the United Nations and the European Union. Within national jurisdictions, countries
generally treat companies and other forms of business association as separate legal entities.
Ships are generally considered legal persons under maritime law, and legal status has
been attributed to animals under several national jurisdictions.

Moreover, in India courts have recognised Hindu idols as legal entities,23 considering them
capable of having the legal right of owning property and the legal duty of paying taxes.24,25


18 L. B. Solum, Legal Personhood for Artificial Intelligences 70 N.C. L. REV. 1231, 1287 (1992).
19 JOHN CHIPMAN GRAY, THE NATURE AND SOURCES OF THE LAW (1909).
20 Id.
21 Supra note 18, at 1239.
22 John Dewey, The Historic Background Of Corporate Legal Personality 35 YALE L.J. (1926).
23 Pramatha Nath Mullick v. Pradyumna Kumar Mullick, (1925) 27 B.O.M.L.R. 1064 (India).
24 Yogendra Nath Naskar v. Commissioner Of Income Tax, 1969 A.I.R. 1089 (India).

In New Zealand, the Whanganui River was granted legal personhood in March 2017,
since the people belonging to the Whanganui Māori tribe regard the river as their ancestor.26

A legal person is one who is subject to legal rights and duties. Moreover, evidence shows that
the legal status of people, objects, animals and other realities (like companies and rivers)
varies from one jurisdiction to the other and over the years, even inside the same jurisdiction
and regarding the same reality.27 Thus, legal personhood isn’t granted on the basis of being a
‘natural person’, but as a consequence of legislative options based on moral considerations,
the attempt to reflect social realities in the legal framework or merely legal convenience.28
Thus, it’s pertinent to determine whether artificially intelligent entities are morally entitled to
be recognised as separate legal entities, whether doing so would reflect a social reality, and
whether it would serve legal convenience.

ARGUMENTS IN FAVOUR OF GRANTING LEGAL PERSONHOOD TO ARTIFICIALLY
INTELLIGENT MACHINES
Whether artificially intelligent entities are morally entitled to be considered as separate legal
entities?

Before tackling this question, it’s important to understand which realities are morally entitled
to be considered legal persons and what attributes they should possess.29
Such realities are humans and animals, and such attributes are the abilities to behave in an
autonomous manner and to have subjective experiences. Thus, even for AIs, the important
considerations for being morally entitled to separate legal personhood are the capabilities to
act autonomously and to have subjective experiences.

“A robot’s autonomy can be defined as the ability to take decisions and implement them in
the outside world, independently of external control or influence.”30 Considering the various
types of AI mentioned above in this context, it can be concretely stated that self-aware
machines and machines with a theory of mind possess this trait; and reactive machines are


25 Supra note 7, at 16 & 17.
26 Eleanor Ainge Roy, New Zealand River Granted Same Legal Rights As Human Being, THE GUARDIAN,
MARCH 16, 2017, https://www.theguardian.com/world/2017/mar/16/new-zealand-river-granted-same-legal-
rights-as-human-being.
27 Supra note 7, at 17 & 18.
28 Tom Allen & Robin Widdison, Can Computers Make Contracts? 9 HARV. J.L. & TECH. (1996).
29 Supra note 7, at 18.
30 EUROPEAN PARLIAMENT, CIVIL LAW RULES ON ROBOTICS (2017).

not autonomous.31 Although machines with limited memory can’t strictly be termed
autonomous, it can be argued that they act in an autonomous manner, since they have the
capacity to add their observations to their decision-making processes.32

The ability to have subjective experiences depends on self-awareness. Similar to
humans and animals, a machine has a subjective experience when it forms representations
about itself that affect its ability to feel or perceive reality.33 Only sentient machines are able
to have subjective experiences: self-aware machines can, while machines with limited
memory, those with a theory of mind and reactive machines can’t, since they lack sentience.

Thus, it can be concluded that self-aware machines are morally entitled to be considered as
separate legal entities since they are autonomous and have the ability to have subjective
experiences. On the other hand, machines with limited memory, those with a theory of mind
and reactive machines aren’t morally entitled to the same. This is because the first two can’t
have subjective experiences even though they are autonomous, while reactive machines fulfil
neither of the two requirements.

Therefore, in order to make a case for granting legal status to the latter three categories of AI,
the basis has to be considerations other than morality.

Whether granting legal status to artificially intelligent machines would reflect a social
reality?

The field of AI is progressing at an ever-increasing pace, and hence it can be said that in the
future people will perceive AIs as autonomous entities and parties to transactions, similar to
how society currently views corporations as legal entities separate from their members.
When this happens, the law will be forced to give legal effect to this social reality.34

Whether granting legal status to artificially intelligent entities would be legally convenient?


31 Supra note 7, at 18.
32 Id. at 18.
33 Id. at 19.
34 Supra note 15.


Generally, ships are considered legal persons under maritime law. This allows “those
who have an interest in the ship’s business to subject it to a form of arrest.”35 People
generally don’t think of ships as being morally entitled to legal personhood; nor do they
consider ships to be real, extra-legal personalities.36 Ships are given legal status only because
it serves “a valuable legal purpose in a convenient and relatively inexpensive manner.”37

This very same rationale can be applied to AI entities as well, since treating them
as separate legal entities would serve legal convenience.38 This is because if AIs are treated as
legal persons, it will be possible to solve the issues of liability in civil and
criminal matters. Moreover, this would also give legal systems the opportunity to frame an
adequate legal status for these AIs, replete with legal rights and duties appropriate to their
characteristics, instead of merely attempting to fit these entities under an existing legal
framework drafted for a different reality, such as humans, animals or objects, which wouldn’t
necessarily suit them.39

But this logic doesn’t apply to all types of AI; it only applies to those machines
which are able to take autonomous decisions.40 All types of machines except reactive ones
act in an autonomous manner. Reactive machines, on the other hand, can’t
make autonomous decisions: their decisions are mere reflex actions to the inputs given by
their designers/owners, and they have zero or low complexity because no agent-made
observations influence their decision-making processes.41 Thus, self-aware machines,
machines with limited memory and those with a theory of mind should be granted separate
legal status, but the same shouldn’t be conferred on reactive machines since their conduct
can’t be disassociated from their respective designers or owners.

Conclusion
Thus, it can be concluded that self-aware machines should be granted legal status since they
are morally entitled to be granted the same, and doing so would not only reflect a social
reality but also be legally convenient. In case of machines with limited memory and those
with a theory of mind, they should be recognised as legal persons on the basis of the second


35 Supra note 28.
36 Id.
37 Id.
38 Supra note 7, at 20.
39 Id. at 20.
40 Id. at 20.
41 Id. at 20.


and third parameters. There is no reason in favour of considering reactive machines as
separate legal entities. The particular set of legal rights and duties that AIs would be
subjected to should be decided carefully.

ARGUMENTS AGAINST GRANTING OF LEGAL PERSONHOOD TO ARTIFICIALLY
INTELLIGENT MACHINES
‘Missing Something’
Most of the arguments against granting legal status to AIs fall within the ambit of what
certain scholars call ‘missing something’42, that something ranging from self-
awareness to consciousness to biological aspects.43 As the example of the legal status of
corporations demonstrates, granting legal personhood to a given reality is a fiction created by
legislators to serve the purpose of regulating “life in society, and commercial and non-
commercial transactions, and ensure the internal coherence of legal systems.”44 Thus, there
is no ‘something’ which is ‘missing’.

Potential to Undermine the Legal and Moral Position of Humanity
Further, some scholars argue that granting legal personhood to AIs has the potential to
undermine the legal and moral position of humanity.45 However, it is safe to argue that if
any harm is caused to the legal and moral position of humanity, it will
be because of the development of AI itself and not because of the ex post granting of a
separate legal status.46

Corporations v. Artificial Intelligence
But there is an important difference between corporations and AIs.47 Corporations
are fictitiously autonomous; their decision-making process is driven by their stakeholders. On
the other hand, AIs may be actually autonomous; their programmers or users may not be
in control of their actions. Therefore, the legal status of corporations is merely a starting point
for arguing in favour of granting legal personhood to AIs.


42 Supra note 18.
43 Supra note 7, at 20.
44 Id. at 21.
45 John P. Fischer, Computers As Agents: A Proposed Approach To Revised U.C.C. Article 2 72 IND. L.J.
(1997).
46 Supra note 7, at 21.
47 Supra note 5, at 24.

The granting of legal personhood to AIs is predominantly based on the three arguments as
explained above; the analogy with companies merely provides additional support to the
claim. Thus, the above criticism doesn’t hold water.

Identifying the Artificially Intelligent Entity
An important pragmatic question which needs to be answered is how one can identify the
subject AI.48 Is it the ‘vessel’, i.e. the hardware defined by its functional abilities, or is it
the software, i.e. a particular set of binary code?49 This question becomes even more complex
in situations where the hardware and software are maintained by different individuals or
spread across different locations, and in cases where the software is able to modify itself.50 An
expensive but possible answer is registration.51 “In the absence of registration, a purported
agreement would have the same status as an agreement made by a corporate agent which
was never properly incorporated.”52 Whatever the case, a close nexus between
legislators and AI designers will be necessary to establish an efficient identification
mechanism.53
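One hypothetical shape such a registration record could take is sketched below. None of the fields come from the sources cited or from any existing scheme; the idea is simply that binding a software fingerprint, its hardware ‘vessel’ and a responsible operator together under one identifier keeps the entity identifiable even when its code changes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIRegistryEntry:
    """Hypothetical registration record tying an AI's identity together
    across hardware, software and responsible parties."""
    registration_id: str       # unique identifier, like a company number
    software_hash: str         # fingerprint of the currently deployed code
    hardware_id: str           # the 'vessel' the software runs on
    responsible_operator: str  # person or entity legally answerable for it
    prior_hashes: tuple[str, ...] = ()  # audit trail for self-modifying code
```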

The Responsibility Objection54
It is contended that AIs, by their nature, wouldn’t be responsible enough, in terms of both
fulfilling their obligations and bearing the consequent liability for breach of trust.



The Judgment Objection55
It is argued that AIs can’t make the same judgement calls that humans can make when faced
with similar situations. This contention is primarily based on the moral dilemma of
empowering AIs to make decisions which are moral and subjective.

The latter three objections aren’t sufficient to trump the need for granting legal status
to AIs, i.e. the need to hold them liable in case they cause legal injury to any person. Thus,
these three objections can be set aside.


48 Supra note 28.
49 Supra note 15.
50 Supra note 7, at 21.
51 Id. at 21.
52 Supra note 28.
53 Supra note 7, at 21.
54 Supra note 5, at 13.
55 Id. at 13.

RECENT DEBATE OVER THE EUROPEAN PARLIAMENT’S PROPOSAL TO INTER ALIA GRANT
LEGAL STATUS TO ARTIFICIALLY INTELLIGENT ENTITIES
The Resolution on Civil Law Rules of Robotics with Recommendations to the Commission on
Civil Law Rules on Robotics

On 27 January 2017, the European Parliament’s Committee on Legal Affairs submitted its
‘Report with Recommendations to the Commission on Civil Law Rules on Robotics’.56 On 16
February 2017, the European Parliament adopted a Resolution on Civil Law Rules of
Robotics with recommendations to the Commission on Civil Law Rules on Robotics.57 The
resolution is an official request to the Commission to submit a proposal for civil law rules
on robotics to the European Parliament. It contains a comprehensive set of
recommendations on what the final civil rules should encapsulate, including the
following:58

1. “Definition and classification of ‘smart robots’

2. Registration of ‘smart robots’

3. Civil law liability

4. Interoperability, access to code and intellectual property rights

5. Disclosure of use of robots and artificial intelligence by undertakings

6. Charter on Robotics”

Under paragraph 59, the resolution calls on the Commission to evaluate several legal
solutions, including the following:59

f) “creating a specific legal status for robots in the long run, so that at least the most
sophisticated autonomous robots could be established as having the status of
electronic persons responsible for making good any damage they may cause, and
possibly applying electronic personality to cases where robots make autonomous
decisions or otherwise interact with third parties independently;”


56 EUROPEAN PARLIAMENT’S COMMITTEE ON LEGAL AFFAIRS, REPORT WITH RECOMMENDATIONS TO THE
COMMISSION ON CIVIL LAW RULES ON ROBOTICS (2017).
57 Supra note 30.
58 Id.
59 Id. at ¶59.

The resolution aims to resolve the grey area regarding the liability of AIs, especially in the
case of self-driving cars which may be involved in crashes, or automated machinery in the
workplace.

Proponents of the Proposal
The proponents of the idea of granting legal status to AIs, including some manufacturers and
their affiliates, hailed the proposal as common sense.60 They argue that granting legal
personhood wouldn’t make AIs virtual people able to get married and take
advantage of human rights; it would just put them on the same pedestal as corporations.

Mady Delvaux, a Member of the European Parliament (MEP) and Vice-Chair of the European
Parliament’s Legal Affairs Committee, said that although she wasn’t certain about
granting legal status to AIs, she was “more and more convinced” that the existing legal
framework was inadequate to tackle the complicated issues surrounding liability and self-
learning machines, and that all possible options should therefore be taken under consideration.61

Advocates of the proposal argue that similar to the legal model for companies, granting legal
status to AIs would be more about holding them liable in case they cause legal injury to
anyone and less about giving rights to them.62

On a similar basis, Delvaux emphasised that the intention behind suggesting an electronic
personality was to ensure that an AI is and will remain a machine with a human backing, and
not about granting them human rights.63

Opposition to the Proposal
But the proposal has met strong opposition from 156 AI experts belonging to 14 European
nations, including computer scientists, law professors and CEOs, who collectively wrote a
letter to the European Commission voicing their opinions.64 They argued that recognising
AIs as separate legal persons would be ‘inappropriate’ from a ‘legal and
ethical’ standpoint. This argument can be countered: AIs are morally entitled to be
granted legal personhood, doing so would not only reflect a social reality but also be legally


60 Janosch Delcker, Europe divided over robot ‘personhood’, POLITICO, April 11, 2018,
https://www.politico.eu/article/europe-divided-over-robot-ai-artificial-intelligence-personhood/.
61 Id.
62 Id.
63 Id.
64 Id.

convenient, and the same is needed so as to hold AIs liable in case they cause legal injury to
any person.

The letter also contends that granting legal personhood would contradict human rights
law, since AIs would then have the right to dignity, integrity, citizenship and remuneration. This
contention can be negated: the rationale behind granting legal personhood isn’t to give
human rights to AIs but to hold them liable.

Nathalie Navejans, a French law professor at the Université d’Artois, stated that “By adopting
legal personhood, we are going to erase the responsibility of manufacturers.” However, the
point of granting legal status to AIs isn’t to absolve manufacturers of their liability, but to
correctly hold the AIs responsible when there is no deficiency in programming or malice on
the part of the manufacturers.

CIVIL LIABILITY
Now that we have answered the question of whether AIs should be granted legal status in the
affirmative, the next question that needs to be answered is this: in cases where the
programmer has programmed the AI in good faith, there is no deficiency in the programming
and the AI acts autonomously, should the artificially intelligent entity be held liable
for causing legal injury to anyone?

This part will deal with civil liability and the next one will discuss criminal liability.

Regarding the civil liability of AIs, we face the following trade-off: holding the AI liable
would simultaneously absolve its designer from the same.65 In addition to this, the issue of
AIs’ liability arises only in situations where these machines make autonomous decisions.66

In cases where an AI is programmed or used to take a specific action and it acts accordingly,
the AI is simply a means to an end. Thus, if a programmer/user programmes or uses an AI in
this way in order to make it commit a civil wrong, the programmer or user should be directly
held responsible.67 Bearing this in mind, the programmers/users of reactive machines should
be held liable for such machines’ actions, since those machines are incapable of making
autonomous decisions.


65 Supra note 7, at 27.
66 Id. at 27.
67 Id. at 27.

However, complications arise in allocating liability when AIs make autonomous decisions.
At this juncture, it’s important to differentiate between cases where there is a deficiency in
programming and those where there isn’t. In the former, the AIs aren’t programmed to act
in the manner which gives rise to liability but they have the ability to make the autonomous
decisions that lead to it due to defective coding.68 In such situations, both the designer and the
AI should be held liable. The latter situations deal with “accountability for actions that
autonomous robots take, not related to coding deficiencies but to their evolving conduct.”69

Where the AI is developed according to the best practices, there isn’t any defect in the
programming and it was properly tested, and the AI’s action gives rise to liability as a
consequence of its own evolving conduct, is it reasonable to hold the designer responsible?70

On the one hand, if designers run the risk of incurring liability even after taking the
maximum care possible, they may be deterred from developing AIs, which would make
technological advancement hit a roadblock. Moreover, a technological stall may prove
counterproductive if our primary concern is safety; for example, self-driving cars will
probably lead to an overall reduction in the number of traffic accidents. On the other hand,
the designer of an AI can’t be absolved of liability merely because AIs are generally
unpredictable. It isn’t possible to explain to a person who has suffered injury due to a
self-driving car that she can’t claim damages from anyone since AIs are generally unpredictable.71

Thus, legislators need to draft a liability framework wherein the designers are able to exempt
themselves from liability when they take the maximum care possible, and at the same time,
the victim of the AI’s unpredictable evolving behaviour can be compensated for the legal
injury caused to her.

The above can be done through an insurance scheme. Under paragraph 59, the European
Parliament’s Resolution on Civil Law Rules of Robotics with recommendations to the
Commission on Civil Law Rules on Robotics calls on the Commission to evaluate all
possible legal solutions, including the following:72


68 Id. at 27.
69 NATIONAL SCIENCE AND TECHNOLOGY COUNCIL OF THE EXECUTIVE OFFICE OF THE PRESIDENT OF THE
UNITED STATES OF AMERICA, PREPARING FOR THE FUTURE OF ARTIFICIAL INTELLIGENCE (2016).
70 Supra note 7, at 28.
71 Sabine Gless et al., If Robots Cause Harm, Who Is To Blame? Self-Driving Cars And Criminal Liability 19
NEW CRIM. L. REV. (2016).
72 Supra note 30, at ¶59.


a) “establishing a compulsory insurance scheme where relevant and necessary for
specific categories of robots whereby, similarly to what already happens with cars,
producers, or owners of robots would be required to take out insurance cover for the
damage potentially caused by their robots;

c) allowing the manufacturer, the programmer, the owner or the user to benefit from
limited liability if they contribute to a compensation fund, as well as if they jointly
take out insurance to guarantee compensation where damage is caused by a robot;”

The solutions suggested above for autonomous AIs should be applicable to machines with
limited memory and those with a theory of mind, but not to self-aware AIs.73 This is because
self-aware machines are completely autonomous, being sentient and conscious. While a
self-aware AI decides to act in a certain manner which gives rise to liability
as a result of its own judgement, a machine with a theory of mind merely thinks that it is doing so;
that thinking is the result of being (directly or indirectly) programmed to think so. For self-
aware AIs, the rules of liability should be those which are applicable to humans, mutatis
mutandis.

CRIMINAL LIABILITY
Now, we move on to discuss the criminal liability of AIs. In 2015, more than 1,000 AI
and robotics researchers, along with figures such as Elon Musk and Stephen Hawking, issued
a statement warning about the devastating consequences of autonomous weaponry.74 As stated
earlier, there is no country in the world which grants legal personhood to AIs. Thus, there is
no jurisdiction across the globe where AIs fall within the ambit of criminal law.

Again, as stated earlier, Asimov’s ‘Three Laws of Robotics’ are the only ‘laws’ that exist.
Later, he added a fourth law, the ‘Zeroth Law’, which precedes all the others in priority.
It is as follows:

0. “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”


73 Supra note 7, at 30.
74 Lucas Matney, Hawking, Musk Warn Of ‘Virtually Inevitable’ AI Arms Race, TECHCRUNCH, July 27, 2015,
https://techcrunch.com/2015/07/27/%20artificially-assured-destruction/.

According to Gabriel Hallevy, the primary question which arises is what kind of laws are
required to be enacted in order to tackle the situation, and who will decide the same.75
Generally, the two essential elements that must be proven in order to hold any
person criminally liable are the criminal act (actus reus) and the mental element (mens
rea). Keeping these two elements in mind, Hallevy proposed the following three models
regarding AIs’ criminal liability:76

1. The Perpetration-via-Another Liability Model
Under this model, AIs aren’t considered to possess any human traits. Although this model
recognises an AI’s capability to act as the perpetrator of an offence, the AI is treated merely
as an ‘innocent agent’, like a person with limited mental capability such as a minor, a mentally
incompetent person, or one who lacks a criminal state of mind. The person responsible
for making the AI commit the offence is considered to be the real perpetrator. Thus, in
case an AI commits an offence, the perpetrator-via-another would be either its
programmer or its end user.

2. The Natural-Probable-Consequence Liability Model
This model assumes that the programmers or end users of AIs are deeply involved in the
everyday activities performed by the AIs, but without any intention of committing an
offence using the AIs as agents; for example, cases where an AI commits an offence
while performing its daily tasks. Under this model, the programmers or end users are held
liable not because they had a criminal intention, but because of their
negligence: they should have known about the probability of the forthcoming commission
of the specific offence.




75 Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal
Social Control 4 AKRON INTELL. PROP. J. (2010).
76 Id.

3. The Direct Liability Model
This last model considers that the AI’s actions don’t depend on its programmer or end
user; it treats the AI as an autonomous entity. Under this model, if both the essential
elements of a specific offence, actus reus and mens rea, are fulfilled, then the AI would
be held criminally liable just as a human or a corporation would be had she/it
committed the same offence. Although proving actus reus would be straightforward, the
attribution of specific intent would be a difficult task.

It’s important to note here that the programmer or end user may be held criminally liable
along with the AI.77 These three models are to be considered together, and applied according
to the particular context of the AI’s involvement.78
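As a rough illustration of how the three models partition the possible fact patterns, the toy sketch below maps a few hypothetical flags about an incident to the models that could apply. The flag names are invented for illustration; a real determination would of course turn on legal judgment, not booleans.

```python
def applicable_models(user_intended_offence: bool,
                      offence_foreseeable: bool,
                      ai_acted_autonomously: bool) -> list[str]:
    """Toy mapping from the facts of an incident to Hallevy's models."""
    models = []
    if user_intended_offence:
        # The AI was used as a mere instrument: an 'innocent agent'.
        models.append("perpetration-via-another")
    elif offence_foreseeable:
        # No intent, but the offence was a foreseeable consequence: negligence.
        models.append("natural-probable-consequence")
    if ai_acted_autonomously:
        # The AI itself may satisfy actus reus and, arguably, mens rea.
        models.append("direct-liability")
    return models
```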

PUNISHMENT
An important aspect of criminal liability is punishment, and in the case of AIs a number of
problems arise. If an AI is convicted of an offence punishable with imprisonment, how
would the AI be incarcerated?79 Similarly, how can an AI be sentenced to capital punishment
or probation?80 Since AIs don’t have any wealth, how pragmatic is it to impose a fine upon
them?81 Similar issues were encountered when the criminal liability of corporations was
initially discussed, and just as the law was successfully modified for corporations, the same
can happen for AIs as well.82 Hallevy discusses how the following punishments for humans
can be modified for AIs:83

1. Capital Punishment
The deletion of the AI software would serve the same purpose for an AI as capital
punishment does for a human.

2. Imprisonment


77 Id.
78 Id.
79 Supra note 5, at 23.
80 Erica Fraser, Computers as Inventors – Legal and Policy Implications of Artificial Intelligence on Patent Law
13 SCRIPTED (2016).
81 Supra note 5, at 23.
82 John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry into the Problem of
Corporate Punishment 79 MICH. L. REV. (1981).
83 Supra note 75.

The purpose behind putting a human behind bars is to deprive her of liberty and impose
severe restrictions on her freedom of movement.84 According to Hallevy, ‘liberty’ or
‘freedom’ in the case of an AI refers to the freedom to act as an AI in its relevant area.85
Thus, putting the AI out of use in its field of work for a particular period could
restrict its freedom and liberty much as incarceration does for humans.

3. Community Service
Similar to community service for humans, the AI offender can be made to work in an area
of its choice so as to benefit society.

4. Fine
Imposing a fine on an AI would only be beneficial if it owns property or has
money. If this isn’t the case, the fine can be collected in the form of community service.

CONCLUSION
In cases where the programmer has programmed the AI in good faith, there is no
deficiency in programming and the AI acts autonomously, the AI should be held liable for
causing legal injury to anyone. For this it’s necessary that AIs are recognised as legal persons,
and thus an important question to ponder is whether AIs should be granted legal
‘personhood’. There is no country in the world which legally recognises AIs as legal
persons.

A legal person is one who is subject to legal rights and duties. Legal personhood isn’t
confined to human beings alone; companies, ships etc. are also considered as legal persons.
Thus, AIs can fall within the ambit of legal personhood if it can be proven that they should be
subject to legal rights and duties.

Thus, it’s pertinent to answer the following three questions: whether artificially intelligent
entities are morally entitled to be recognised as separate legal entities; whether doing so
would reflect a social reality; and whether it would serve legal convenience.

Self-aware machines should be granted legal status since they are morally entitled to be
granted the same, and doing so would not only reflect a social reality but also be legally
convenient. In case of machines with limited memory and those with a theory of mind, they


84 David J. Rothman, For the Good of All: The Progressive Tradition in Prison Reform HISTORY AND CRIME
(James A. Inciardi & Charles E. Faupel eds., 1980).
85 Supra note 80.

should be recognised as legal persons on the basis of the second and third parameters. There
is no reason in favour of considering reactive machines as separate legal entities. The
particular set of legal rights and duties that AIs would be subjected to should be decided
carefully.

If a programmer/user programmes or uses an AI to take a specific action and it acts
accordingly, leading to a civil wrong, the programmer or user should be directly held
responsible. Thus, the programmers/users of reactive machines should be held liable for such
machines’ actions.

In cases where there is a deficiency in programming, the AIs aren’t programmed to act in the
manner which gives rise to liability, but they have the ability to make the autonomous
decisions that lead to it due to defective coding. In such situations, both the designer and the
AI should be held liable.

In cases where there is no deficiency in programming, legislators need to draft a liability
framework wherein the designers are able to exempt themselves from liability when they take
the maximum care possible, and at the same time, the victim of the AI’s unpredictable
evolving behaviour can be compensated for the legal injury caused to her. This can be done
through an insurance scheme. The European Parliament’s Resolution on Civil Law Rules of
Robotics with recommendations to the Commission on Civil Law Rules on Robotics
discusses insurance and a compensation fund.

The solutions suggested for autonomous AIs should be applicable to machines with limited
memory and those with a theory of mind, but not to self-aware AIs. For self-aware AIs, the
rules of liability should be those which are applicable to humans, mutatis mutandis.

The rules for criminal liability of AIs should be based on the three models given by Gabriel
Hallevy which are the Perpetration-via-Another Liability Model, the Natural-Probable-
Consequence Liability Model and the Direct Liability Model. The punishments for humans
should be modified accordingly for AIs.

While deciding the law relating to the liability of AIs, it would be pertinent that legislators
take a reasonable and balanced view with regard to the protection of the rights of
citizens/individuals and the need to encourage technological growth.86 If the same isn’t
achieved, it may adversely affect either the protection of rights or innovation and creativity.

Moreover, the law must also be clear on the rights and duties of the programmers so as to
crystallize the broad ethical standards that they must conform to.87

In the film ‘I, Robot’ (2004), a robot is suspected of killing its own creator. Were such a thing
to happen in reality, the creator herself would be held liable for her own death. Wouldn’t that
be absurd? But then, so is not granting legal status to artificially intelligent entities.




86 Supra note 5, at 28.
87 Id. at 28.


Bibliography

1. HANSON ROBOTICS, https://www.hansonrobotics.com/sophia/.

2. Andrew Griffin, Saudi Arabia Grants Citizenship to a Robot for the First Time Ever,
INDEPENDENT, October 26, 2017,
https://www.independent.co.uk/life-style/gadgets-and-tech/news/saudi-arabia-robot-
sophia-citizenship-android-riyadh-citizen-passport-future-a8021601.html.

3. Zara Stone, Everything You Need To Know About Sophia, The World’s First Robot
Citizen, FORBES, November 07, 2017,
https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-
about-sophia-the-worlds-first-robot-citizen/#7d38b73d46fa.

4. UNDP in Asia and the Pacific Appoints World’s First Non-Human Innovation
Champion, U.N.D.P. IN ASIA AND THE PACIFIC, November 22, 2017,
http://www.asia-
pacific.undp.org/content/rbap/en/home/presscenter/pressreleases/2017/11/22/rbfsinga
pore.html.

5. NISHITH DESAI ASSOCIATES, THE FUTURE IS HERE: ARTIFICIAL INTELLIGENCE AND
ROBOTS 13 & 14 (2018).

6. CAMBRIDGE DICTIONARY, https://dictionary.cambridge.org/.

7. Filipe Maia Alexandre, The Legal Status of Artificially Intelligent Robots 01, 68
(2017).

8. A.M. Turing, Computing Machinery and Intelligence 59 MIND 433, 460 (1950).

9. STUART J. RUSSELL & PETER NORVIG, ARTIFICIAL INTELLIGENCE: A MODERN
APPROACH (3rd edn. 2009).

10. GEORGE F. LUGER, ARTIFICIAL INTELLIGENCE: STRUCTURES AND STRATEGIES FOR
COMPLEX PROBLEM SOLVING (5th edn. 2004).

11. DAVID POOLE ET AL., COMPUTATIONAL INTELLIGENCE: A LOGICAL APPROACH (1998).


12. NILS J. NILSSON, ARTIFICIAL INTELLIGENCE: A NEW SYNTHESIS (1998).

13. Arend Hintze, Understanding the four types of AI, from reactive robots to self-
aware beings, THE CONVERSATION, November 14, 2016,
https://theconversation.com/understanding-the-four-types-of-ai-from-reactive-robots-
to-self-aware-beings-67616.

14. David Premack & Guy Woodruff, Does The Chimpanzee Have A Theory Of Mind? 1
BEHAV. & BRAIN SCI. (1978).

15. Benjamin D. Allgrove, Legal Personality For Artificial Intellects: Pragmatic Solution
Or Science Fiction? (2004).

16. ISAAC ASIMOV, I, ROBOT (1950).

17. L. B. Solum, Legal Personhood for Artificial Intelligences 70 N.C. L. REV. 1231,
1287 (1992).

18. JOHN CHIPMAN GRAY, THE NATURE AND SOURCES OF THE LAW (1909).

19. John Dewey, The Historic Background Of Corporate Legal Personality 35 YALE L.J.
(1926).

20. Pramatha Nath Mullick v. Pradyumna Kumar Mullick, (1925) 27 B.O.M.L.R. 1064
(India).

21. Yogendra Nath Naskar v. Commissioner Of Income Tax, 1969 A.I.R. 1089 (India).

22. Eleanor Ainge Roy, New Zealand River Granted Same Legal Rights As Human Being,
THE GUARDIAN, MARCH 16, 2017,
https://www.theguardian.com/world/2017/mar/16/new-zealand-river-granted-same-
legal-rights-as-human-being.

23. Tom Allen & Robin Widdison, Can Computers Make Contracts? 9 HARV. J.L. &
TECH. (1996).

24. EUROPEAN PARLIAMENT, CIVIL LAW RULES ON ROBOTICS (2017).


25. John P. Fischer, Computers As Agents: A Proposed Approach To Revised U.C.C.
Article 2 72 IND. L.J. (1997).

26. EUROPEAN PARLIAMENT’S COMMITTEE ON LEGAL AFFAIRS, REPORT WITH
RECOMMENDATIONS TO THE COMMISSION ON CIVIL LAW RULES ON ROBOTICS (2017).

27. Janosch Delcker, Europe divided over robot ‘personhood’, POLITICO, April 11, 2018,
https://www.politico.eu/article/europe-divided-over-robot-ai-artificial-intelligence-personhood/.

28. NATIONAL SCIENCE AND TECHNOLOGY COUNCIL OF THE EXECUTIVE OFFICE OF THE
PRESIDENT OF THE UNITED STATES OF AMERICA, PREPARING FOR THE FUTURE OF
ARTIFICIAL INTELLIGENCE (2016).

29. Sabine Gless et al., If Robots Cause Harm, Who Is To Blame? Self-Driving Cars And
Criminal Liability 19 NEW CRIM. L. REV. (2016).

30. Lucas Matney, Hawking, Musk Warn Of ‘Virtually Inevitable’ AI Arms Race,
TECHCRUNCH, July 27, 2015, https://techcrunch.com/2015/07/27/%20artificially-
assured-destruction/.

31. Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities – From
Science Fiction to Legal Social Control 4 AKRON INTELL. PROP. J. (2010).

32. Erica Fraser, Computers as Inventors – Legal and Policy Implications of Artificial
Intelligence on Patent Law 13 SCRIPTED (2016).

33. John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalised Inquiry
into the Problem of Corporate Punishment 79 MICH. L. REV. (1981).

34. David J. Rothman, For the Good of All: The Progressive Tradition in Prison Reform
HISTORY AND CRIME (James A. Inciardi & Charles E. Faupel eds., 1980).


