
MODULE 2

LEGAL PERSONHOOD OF ARTIFICIAL INTELLIGENCE

Which image comes to your mind when you think of 'robots'? Is it a
metallic square box with beady eyes, a metallic chunk in a semi-human
form, or a realistic human-like figure, as in The Terminator? The problem is
that our image of a 'robot' is heavily influenced by the fiction we have
consumed. Movies like Ex Machina, WarGames, Alien and Alien: Covenant,
Forbidden Planet, RoboCop and A.I. have shaped, to a large extent, our
thinking about AI and robots as legal characters, and they provoke a
thought, albeit a casual one, about the differences between humans, animals
and non-humans (the likes of nation states, corporations, inanimate objects
and AI). Legislators worry that this fictional image, akin to that of a super
villain or destroyer, might be an influence when deciding upon the legal
personhood of AI1.

2.1. WHO IS A PERSON?

You might relate the term 'person' to the people around you. All human
beings around you are persons. On deeper thought, you might also call
your pet or other animals persons, as has been declared by multiple High
Courts of India.

1 European Civil Law Rules for Robots, Pg 7 supra 4 (accessible at
https://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf).

The High Courts of Punjab and Haryana and of Uttarakhand have, inter alia,
declared in their exceptional judgements that animals2 such as birds and
tigers have a 'personality' and hence enjoy certain rights.

Courts have time and again declared some inanimate objects, like rivers3,
to be persons in order to give them rights and to ensure that they are well
covered under the protective umbrella of our Fundamental Rights.

In legal terms, “a person is any being whom the law regards as capable of
rights and duties. Any being that is so capable is a person.”4

This definition clearly supports our courts' practice of conferring
personality on inanimate objects in order to give them rights, but what
about their duties? The jurisprudence on the legal duties and liabilities of
animals and rivers is not clearly defined and is currently considered a
legal grey area. However, the legal liability of other inanimate entities,
such as the nation state, companies and corporations, is settled by multiple
precedents and statutes like the Companies Act, 2013. Corporate criminal
liability under Indian criminal law provides for the extent to which a
company, as a legal person or a separate legal entity, is liable for the acts
done by its employees. Corporate criminal liability in India is governed by
the norms of vicarious liability, as distinct from scenarios in which a
statute specifically makes the company liable for a particular offence.

2 Ananya Bhattarcharya, Birds to holy rivers: A list of everything India considers “legal persons”, available at
https://qz.com/india/1636326/who-apart-from-human-beings-are-legal-persons-in-india/, last accessed Oct 26,
2019.
3 Kennedy Warne, A Voice for Nature, available at https://www.nationalgeographic.com/culture/2019/04/maori-
river-in-new-zealand-is-a-legal-person/, last accessed Oct 16, 2019.
4 Alexis Dyschkant, Legal Personhood: How We Are Getting It Wrong, Illinois Law Review 2015.

All humans are called "natural" persons because they are persons by virtue
of being born, and not by legal decree.5

2.2. WHAT IS PERSONHOOD?

Legal personhood is man-made, a creature of administrative convenience
and social convention that specialists and non-specialists alike often take
for granted.6 Legal personhood confers capacities on non-humans; it
involves a bundle of rights, duties, powers and disabilities.7 However,
jurists have often opined that it lacks a statutory definition and is instead
founded on popular definitions and on historical, political, philosophical
and theological considerations.8

It can thus be concluded that the status of personhood does not rest solely
on the presence or absence of the human genome. So, when we look at
humanoids, autonomous intelligent devices, or independently functioning
machines with deep-learning capabilities, can they be classified as
humans? Can it be argued that, now that they can replicate human
emotions and gestures and have an apparently independent capacity to
think, they are human-like, or can be given the status of a human?

How can such an abstract thing (after all, robots are nothing but machines
with an algorithm coded into them) be given a legal character and
personality? Notably, Saudi Arabia conferred its citizenship9 on the robot
Sophia in October 2017.

5 Solum, supra note 7.
6 Roger Cotterrell, The Sociology of Law (Butterworths, 2nd ed, 1992) 124.
7 Wesley Newcomb Hohfeld, Fundamental Legal Conceptions as Applied in Judicial Reasoning (Ashgate,
2001).
8 John Dewey, 'The Historic Background of Corporate Legal Personality' (1926) 35(6) Yale Law Journal 655,
655.

How, and on the basis of what test(s), if any, the nation decided to hand
out its passport to a robot shall be discussed later in this chapter.

Can an artificial intelligence be classified as a legal person? A few years
ago, this question was only theoretical: no existing computer program
possessed the sort of capacities that would justify serious judicial inquiry
into the question of legal personhood. But now, with the advent of
increasingly advanced general AI systems, it is past time that we regulated
AI, and the question is of real interest. Cognitive science begins with the
assumption that the nature of human intelligence is computational and,
therefore, that the human mind can, in principle, be modelled as a program
that runs on a computer.10 Artificial intelligence research attempts to
develop such models.11

Experts consider our present understanding of AI to be slanted. They
believe that we understand robots as they have been depicted in popular
culture and movies. There exists a glass wall between natural and
artificial persons.12

The fear of a robot apocalypse stems from these movies, and from
speeches and warnings by the likes of Elon Musk and Stephen Hawking.
The 'uncanny valley' in human-robot interaction13, the regulation of 'sex

9 Saudi Arabia grants citizenship to Robot Sophia, available at https://www.dw.com/en/saudi-arabia-grants-
citizenship-to-robot-sophia/a-41150856, last accessed Oct 25, 2019.
10 OWEN J. FLANAGAN, JR., THE SCIENCE OF THE MIND 1-22 (2d ed. 1991).
11 Bob Ryan, AI's Identity Crisis, BYTE, Jan. 1991, at 239, 239-40.
12 Emily Dickinson, 'Tell all the Truth but tell it slant' in Ralph Franklin.
13 Masahiro Mori, 'The uncanny valley' in (2012) 19(2) IEEE Robotics and Automation 98.

robots’14, cybergeddon15 and the benefits of Automated Vehicles are all a
phenomenon related to this.

In the Case of Sutton's Hospital, an English court held that although
personhood for a corporation, with an identity distinct from, for example,
that of its shareholders, is a fiction, 'it is a reality for legal purposes'.16 It
can hence be said that what is artificial is not necessarily imaginary; an
artificial lake is not an imaginary lake.17

Arguing along the same lines, artificial intelligence is very much real. So,
if a robot can talk like humans, walk like humans, act like humans, and
even think like humans, can it be called a person?

Robots that can talk like humans have been on the market for quite some
time now. Earlier, there were chatbots that could chat with unsuspecting
humans who visited a website, relieving human staff. Google recently
unveiled its Duplex18 AI system, which can hold an uninterrupted,
real-time conversation over a phone call with humans in a human voice
and tone. Chatbots have been used not only by websites but also by
healthcare companies. In these cases, chatbots talk to the visitor, find out
which medicine they want and what dose they want, and may even check
with the prescriber whether he or she prescribed the medicine. The bot
will then, in an automated

14 John Danaher, Brian D Earp and Anders Sandberg, 'Should we campaign against sex robots?' in John Danaher
and Neil McArthur (eds), Robot Sex: Social and Ethical Implications (MIT Press, 2017) 47.
15 Lee McCauley, 'Al Armageddon and the Three Laws of Robotics' (2007) 9(2) Ethics and Information
Technology 153.
16 The Case of Sutton's Hospital (1612) 10 Rep 32b.
17 Arthur Machen, 'Corporate Personality' (1911) 24(4) Harvard Law Review 253, 257 quoted in John Dewey.
18Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone, available at
https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html, last accessed Oct 25, 2019.

manner, get the medicine scheduled for delivery at the address provided
by the user.

Chatbots have also found their place in the mental health industry19, where
people can go and talk to chatbots as if they were their friends and seek
counsel in some cases.
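
To make the chatbot workflow described above concrete, here is a minimal, purely illustrative Python sketch of a scripted, rule-based bot; the keywords, the replies and the order_medicine() helper are hypothetical and do not correspond to any real pharmacy or healthcare system.

# A toy rule-based chatbot: it matches keywords in the visitor's message
# and hands a confirmed order to an automated fulfilment step.
RESPONSES = {
    "hello": "Hi! Which medicine would you like to order?",
    "dose": "Which dosage do you need (for example, 250mg or 500mg)?",
    "bye": "Thank you. Your order will be confirmed shortly.",
}

def reply(user_message: str) -> str:
    text = user_message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I did not understand. Could you rephrase that?"

def order_medicine(name: str, dose: str, address: str) -> str:
    # Placeholder for the automated scheduling and delivery step described above.
    return f"Scheduled delivery of {name} ({dose}) to {address}."

print(reply("Hello there"))
print(order_medicine("Paracetamol", "500mg", "12 Example Street"))

Real systems such as Google Duplex are, of course, far more sophisticated, relying on speech recognition and learned language models rather than keyword rules, but the basic idea of an automated conversational agent is the same.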

Robots with walking 'legs' have been under development for many years
now. Countries like Japan have pioneered robotics and developed robots
that are mobile, either on wheels or on legs. Termed 'sophisticated
robots', these are the next big thing in robotics. Improved balancing and
ever-improving algorithms have now created robots that can also walk on
rugged terrain. But can a robot which can walk like a human be termed a
person?

Computers with embedded deep-learning systems are capable of reasoning
over objective matters. However, their thinking capabilities are limited to
the data set they are trained on, but the same is true for any human: if a
human is taught only one part of a discipline, they will know only that
part and not the whole. So, will the argument that robots have limited
thinking prowess hold when it comes to granting them the status of legal
personhood? These are some of the conundrums legislators and regulators
around the world are facing.
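
As a rough illustration of this limitation, the hypothetical sketch below (assuming Python with scikit-learn installed) trains a tiny text classifier on only two categories; when asked about something outside its training data, it can still answer only with one of the labels it was taught.

# A toy classifier whose "knowledge" is bounded by its training set.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["purrs and chases mice", "meows at night",
               "barks at strangers", "fetches the ball"]
train_labels = ["cat", "cat", "dog", "dog"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(train_texts)
model = MultinomialNB().fit(features, train_labels)

# A parrot lies outside the training distribution, yet the model can only
# answer "cat" or "dog"; it has no way to say "I don't know".
print(model.predict(vectorizer.transform(["talks and mimics sounds"])))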

The European Union, in paragraph 24 of its draft policy on robotics,
which set in motion a resolution on regulation, said that "in view of
robots' new abilities, robots' civil liability is a crucial issue which needs to
be addressed at EU level". However, it does not appear appropriate to
settle the issue by establishing the robot as a liable legal person. Aside
from this, the motion leaves some blanks to be filled regarding liability for
damages caused by an autonomous robot, so as to bring it in line with
civil liability law.

2.2.1. Contractual relationships20

Another concern is the ability of an AI to execute and be bound by
contracts. While international laws have recognised self-enforcing
contracts, there is a need for comprehensive legislation on the subject.
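
As a purely illustrative sketch of what "self-enforcing" can mean in code, the hypothetical Python example below models a simple escrow-style arrangement that settles itself the moment the agreed condition is met; the parties, amount and logic are invented and do not reflect any statute or any real smart-contract platform.

from dataclasses import dataclass

@dataclass
class SelfEnforcingContract:
    # Terms are fixed at creation; performance is triggered automatically.
    buyer: str
    seller: str
    amount: float
    delivered: bool = False
    settled: bool = False

    def confirm_delivery(self) -> None:
        self.delivered = True
        self._settle()

    def _settle(self) -> None:
        # Neither party "decides" to perform; the code executes the bargain.
        if self.delivered and not self.settled:
            self.settled = True
            print(f"Released {self.amount} from {self.buyer} to {self.seller}")

contract = SelfEnforcingContract(buyer="Alice", seller="Bob", amount=100.0)
contract.confirm_delivery()  # settlement happens automatically

The legal difficulty, as the following paragraphs note, is identifying who, or what, is the contracting party when such code is executed by an AI of its own volition.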

Under Indian law, only a "legal person" is competent to enter into a valid
contract. The general rule thus far has been that an AI may not qualify as a
legal person. Hence, a contract entered into by an AI of its own volition
may not be regarded as a valid contract in India.

Consequently, steps need to be taken to ensure that technology standards
are developed to adequately regulate contracts entered into by AI.

2.3. ROLE OF THE MAKER

So, who can be held liable instead? Experts are now calling for the
company that made the product, i.e. the robot or the algorithm within it,
to be held liable. This also makes sense, as a robot's capabilities are
limited by the algorithm it runs on. In the tele-series Salvation21, the
supercomputer 'Tess' was programmed with a significant margin of error

20 https://www.mondaq.com/india/new-technology/712308/can-artificial-intelligence-be-given-legal-rights-and-
duties
21 Alex Kurtzman, 2017, Salvation, (Netflix).

which caused multiple simulation failures, until a human call was made to
launch the gravity tractor, which was successful. Hence, it is not wrong to
say that artificial intelligence is limited to a great extent by human coding.

Also, this code is affected by many factors. Termed "machine learning
bias", or algorithmic bias, this occurs when the results produced by an
algorithm are systematically prejudiced due to erroneous assumptions fed
into the machine-learning process. The bias arises because the individuals
who write the code have conscious or unconscious preferences or leanings
that creep into the algorithm. Such biases are usually hard to detect until
the algorithm, or the AI built on it, is used on a larger scale, or publicly,
which amplifies the problem. The factors can be anything: from the
culture of the place the coder belongs to, to the geopolitical scene of the
time, and even personal factors like the upbringing and family background
of the coder.
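
A small, hedged sketch of how such bias can creep in, using invented numbers and scikit-learn: the historical "decisions" below happen to correlate with group membership, and a model fitted on them simply reproduces the old prejudice for two otherwise identical candidates.

# Toy example of algorithmic bias: the data are fabricated for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is [years_of_experience, group], with group encoded as 0 or 1.
X = [[5, 0], [6, 0], [7, 0], [5, 1], [6, 1], [7, 1]]
y = [1, 1, 1, 0, 0, 0]  # past decisions: group 0 was hired, group 1 was not

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only in group:
print(model.predict([[6, 0], [6, 1]]))  # typically [1 0]; the skew persists

The bias here was never written down as an explicit rule; it was absorbed from the data, which is exactly why such problems are hard to detect until the system is deployed at scale.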

There is a strong possibility that one may have read an article on the
internet, a good one some may say, without even realising that it was
written entirely by an artificially intelligent machine. Quite recently, the
Kingdom of Saudi Arabia became the first country in the world to
officially grant the rights of a normal human to a robot, which has opened
a Pandora's box around the world: is the future here, and is it time to
seriously contemplate the issues surrounding the existence of rights for a
machine?

The issue surrounding the rights of a machine is a fairly contemporary
topic, still at a nascent stage, but one may wonder why it has suddenly
become such a hot issue. The answer may be that new advances in
technology and science have made it

possible for humans to create objects which look like living human
beings, and the possible connection that people may feel towards them
may have led to the current debate around this particular subject, the
immediate instance of it being the granting of rights to a robot by Saudi
Arabia.

A question that may arise straight away is why robots would need rights
in the first place, when in a technical sense they are non-living beings
and, more importantly, do not have what we call consciousness, the two
features usually cited as the reasons an entity may be granted rights. But
there are examples in the current world showing that rights have been
granted to a non-living entity, the most prominent being the rights granted
to a 'corporation', which in a true sense is a non-living entity without any
sense of consciousness at all, yet is granted those rights for the
convenience of human beings, so that they can transact their business
with ease. Still, one may ask why there is a need to grant rights similar to
those of a human being to a robot, as there seems to be no benefit in
providing rights to a robot as there is with a corporation. That may seem
like a good point, yet various experiments show that there may actually
be a need to grant those rights to a robot. And why should a right be
granted to an entity which does not have any consciousness? The primary
question which pops up here is: "what exactly is consciousness?"

Different people may define consciousness differently, with most agreeing
that consciousness is a very complex phenomenon. It can be loosely
defined as the ability to react to and think through an outside

stimulus, and it is regarded as the phenomenon which makes an entity a
living being, from which rights flow. Artificial intelligence, on this view,
lacks the ability to think on its own and can only do what it is
programmed to do. But the ever-increasing advancement in robotics, and
real-world examples such as Sophia, the humanoid robot that continuously
learns as it communicates, show that the fast-diminishing line between a
human being and a robot will seriously challenge our understanding of
what distinguishes humans from robots. And if that distinguishing factor
is consciousness, then what exactly is consciousness?

The argument about consciousness is as fascinating as it is important.
Some people even define consciousness as a form of intelligence. If
intelligence can be equated with consciousness, then a robot may be
called intelligent if it can perform a task without any supervision, if it can
constantly improve, and if it can learn new things. If these three
conditions are satisfied, a machine can be called intelligent. But
intelligence cannot be equated with consciousness: all animals and plants
may not exactly be intelligent as we define it, and yet we can agree that
they are indeed conscious.

We may actually be trapped in ignorance if we think of consciousness as
something objective and binary, as something an entity either has or does
not have. Just as there may be different forms of intelligence, there may
even be different forms of consciousness. Part of the issue in this debate
is that, of all the potential candidates for consciousness in the animal
kingdom outside human beings, octopuses are by far the farthest removed
from humans. Their phylogenetic branch diverged from that of humans

almost a billion years ago. That means that if they developed
consciousness, it would have had 750 million years to evolve differently
from ours.

The experience of consciousness for an animal with eight limbs, the ability
to camouflage itself, and a life lived under water should seemingly be
nothing like our own.

Consciousness apart, there is another factor which may make us
contemplate the question more seriously: not providing rights to machines
which can interact with human beings may actually influence how human
beings treat other human beings. To understand how this may happen, a
study was done by Kate Darling, a researcher at the MIT Media Lab in
Cambridge, Massachusetts, using Pleo, a toy dinosaur robot. Pleo does not
look lifelike, as it is obviously a toy, but it is programmed to act and speak
in ways that suggest not only a form of intelligence but also the ability to
experience suffering. If you hold Pleo upside-down, it will whimper and
tell you to stop in a scared voice. In an effort to see just how far we might
go in extending compassion to simple robots, Darling encouraged
participants at a recent workshop to play with Pleo, and then asked them
to destroy it. Almost all refused. "People are primed, subconsciously, to
treat robots like living things, even though on a conscious level, on a
rational level, we totally understand that they're not real," Darling
observed. The experiment shows that human beings have a natural
empathy towards a creature which, even though it is not a lookalike of a
living being, can show a sense of emotion; even though those emotions
are not natural but programmed, this forces us not to harm it, while
knowing that it has no real sense of the pain or grief it might suffer when
harmed or destroyed.

This conclusion is important from a human being's perspective as well. It
goes to show that our empathy towards things which can feel emotions
flows from our general empathy towards fellow human beings, and that
not granting fundamental rights to robots or humanoids which look and
act just like other human beings and show emotions can have a serious
influence on how we treat actual human beings, as the line between
humans and robots continues to diminish. Imagine that a close friend of
yours turns out to be a humanoid robot. Since you had no idea that 'it' was
a robot, you treated it like a human being. Now, suddenly realising that it
is a robot which does not have any rights at all, would you treat it any
differently? If you then start to treat it differently, as you would a
non-living being, without any sympathy, would that not affect the way
you treat other human beings, about whom you have no idea whether they
are really human or not? Would a child who has grown up seeing
maltreatment meted out to objects identical to humans not be affected in
how they might treat other humans? The answers to these questions are
not as straightforward as they may seem. And there remain certain
questions about granting human rights to machines.

Sophia, the humanoid which has been granted citizenship by Saudi Arabia,
now has all the rights that an ordinary citizen of Saudi Arabia has, even
the right to cast a vote. Now, when Sophia casts her vote, would the choice
of whom to vote for be an independent decision of the citizen Sophia, or of
Hanson Robotics Ltd., which developed her? Moreover, the grant of
citizenship also invites the payment of some kind of tax to the
government: would Sophia be compelled to pay tax on the income she
earns, when the logic behind taxation is the welfare of ordinary people,
not only their physical well-being but also their emotional well-being?
Can Sophia actually be tried for an infraction, given that lashing, a major
form of punishment in Saudi Arabia, cannot effectively be practised
against a robot, since it lacks any real sense of pain? The question
therefore arises whether machines can be considered independent enough
to be given rights similar to those of a real person. This question is
actually based on the premise that all human beings are completely
independent of outside influence while making a decision, without taking
into consideration that every decision a human being makes is in fact
heavily based upon outside influences and the person's life experiences.

The argument that machines do not deserve rights is heavily based on the
fact that they are programmed and therefore are not natural. If a machine
behaves like a normal human being, to the point where it would be nearly
impossible to tell the difference between a machine and a human being,
which is starting to happen and is only going to become more pronounced
in the future, would hurting that machine, or not providing it with the
same rights as a human being, be justified simply because it was
programmed? Would it not be a sort of natural discrimination, which in
the past took the form of racial discrimination, with the so-called
dark-skinned deemed to deserve fewer rights just because they were dark?
Are not all human beings programmed, in the sense that we are all born
'programmed' with the DNA of our parents? Is not teaching or preaching a
sort of programming that we all receive? Answering these questions
would answer our question regarding the grant of personhood to a
machine. Deliberating on this issue may not

only help us solve this problem of personhood and decide upon the future
of human rights, the very name of which may change with the granting of
rights to robots, but may also help us humans answer fundamental
questions about ourselves. What makes us what we are? Why do we
deserve rights in the first place? What makes us conscious? What is
consciousness?

At this stage, regulators and even experts do not have a full grasp of what
artificially intelligent machines are capable of. As we have discussed,
machines can be programmed to a great extent to mimic human-like
features, and possibly we will be able to grant machines even greater
autonomy, allowing them to take decisions on their own. But in the
current scenario, granting personhood to AI can be tricky.

After mulling over the prospect of granting personhood to sentient
machines, the European Union has dropped the question from its future
strategy to address AI22.

2.3.1. Employment and AI23

The driving force behind the development of AI is the demand and need
for automation. With the objective of increasing efficiency, companies
across the world have subscribed to the practice of using AI as a
replacement for the human workforce.

22 Artificial Intelligence: Commission outlines a European approach to boost investment and set ethical
guidelines, Press Release, accessible at https://ec.europa.eu/commission/presscorner/detail/en/IP_18_3362, last
accessed Feb 20, 2020
23 https://www.mondaq.com/india/new-technology/712308/can-artificial-intelligence-be-given-legal-rights-and-
duties

This wave of automation is creating a gap between existing employment
laws and the growing use of AI in the workplace. For instance, can an AI
claim benefits such as provident fund payments or gratuity under existing
employment legislation, or sue a company for wrongful termination of
employment? Such questions also hold relevance for the human
workforce: in most instances AI requires individuals in order to function,
and the failure of employment laws to provide clarity on the above may
adversely impact those individuals as well.

The penetration of self-driven cars, robots and fully automated machines
is only expected to surge with the passage of time. As a result, the
dependency of society as a whole on AI systems is also expected to
increase. To safeguard the integration of AI, a balanced approach will
need to be adopted, one which efficiently regulates the functioning of AI
systems while also maximising their benefits.

