Published by Enhelion, 2021-11-09 01:36:36

Module_3

MODULE 3

LEGAL REGULATION OF ARTIFICIAL INTELLIGENCE

“I have exposure to the very cutting-edge AI, and I think people should be
really concerned about it, I keep sounding the alarm bell, but until people
see robots going down the street killing people, they don’t know how to
react, because it seems so ethereal.” – Elon Musk

3.1. INTRODUCTION

As discussed in the previous chapters, AI is at the helm of a revolution. Today, AI is present everywhere – in your phone, smart watch1, vehicle2, doorbell3 – essentially everywhere. AI-enabled devices are the new normal. There is even a debate now on sexual interaction with robots: static, doll-like sex dolls are a thing of the past, as developers have been adding automated movements to sex dolls to provide more intimacy and pleasure to consumers. Advances in AI are driven by the quest to overcome modern problems, and scientists are optimistic that inventions like sex-bots4 will help people overcome their loneliness.

1 Jack Loughran, Smart watches equipped with advanced AI will monitor your every move, available at
https://eandt.theiet.org/content/articles/2017/09/smart-watches-equipped-with-advanced-ai-will-monitor-your-
every-move/, last accessed Feb 20, 2020.
2 Alyssa Schroer, ARTIFICIAL INTELLIGENCE IN CARS POWERS AN AI REVOLUTION IN THE AUTO
INDUSTRY, available at https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry,
last updated Oct 15, 2019; last accessed Feb 20, 2020.
3 Christine, Face Recognition Doorbells: Is It the Right Time to Buy Now, available at
https://doorbellexpert.com/face-recognition-doorbell-guide/, last accessed Feb 20, 2020.
4 Martha Cliff, 'She is more than plastic': Married Japanese man 'finds love' with a SEX DOLL, available at
https://www.dailymail.co.uk/femail/article-3661804/Married-Japanese-man-claims-finally-love-sex-doll.html,
last accessed Feb 20, 2020.

Artificial Intelligence is witnessing two major revolutions simultaneously: one among developers and the other in enterprises. These revolutions are set to drive technology decisions for at least the next decade. Developers around the globe are embracing AI on a massive scale.

Many platform companies, such as Microsoft, Google, and Palantir, are focused on enabling developers to shift to the next app-development pattern, driven by the intelligent cloud and the intelligent edge. AI is the runtime that will power the apps of the future.

At the same time, tech and other enterprises are keen to adopt and assimilate AI5. AI is changing how MNCs serve their customers, run operations and innovate; as a result, every business process in every industry will be redefined in profound ways. It used to be said that “software is eating the world”6; now it is fair to say that “AI is eating software”.

"Success in creating AI would be the biggest event in human history.
Unfortunately, it might also be the last, unless we learn how to avoid the
risks." -- Stephen Hawking.

Disruptive technologies arrive with regularity, whether the first industrial revolution or the subsequent revolutions that brought communications, aviation and, eventually, digitisation. We stand at the edge

5 270% increase in Enterprise AI adoption over the past 4 years: Gartner, available at
https://content.techgig.com/270-increase-in-enterprise-ai-adoption-over-the-past-4-years-
gartner/articleshow/67658771.cms, last accessed Feb 20, 2020.
6 Marc Andreessen, Why Software Is Eating the World, available at https://a16z.com/2011/08/20/why-
software-is-eating-the-world/, last accessed Feb 20, 2020.

of the next revolution: the AI revolution, in which methods of artificial intelligence and machine learning offer possibilities hitherto unimagined.7 Artificial intelligence (AI) technology has developed rapidly, and the effects of the AI revolution are already being keenly felt in many sectors of the economy. The unique features of AI, and the manner in which AI can be developed, present both practical and conceptual challenges for the legal system. Organizations need to strike a reasonable balance between the data AI needs to work properly and people’s trust. In 2018, the US Congress held several hearings on AI, addressing critical issues such as privacy and job displacement.8 According to a report published by the Brookings Institution, the Indian AI industry “has seen growth, with a total of $150 million invested in more than 400 companies over the past five years.”9

3.2. NEED FOR REGULATION

The use of certain technologies must be regulated to prevent their misuse.

3.2.1. AI can be a double-edged sword

If highly advanced and complex AI systems are left uncontrolled and unsupervised, they risk deviating from desirable behavior and performing tasks in unethical ways. There have been many instances where

7 U. Beck, Risk Society, Towards a New Modernity (Sage, 1992); A. Giddens, Consequences of Modernity
(Polity, 1990); A. Giddens, Modernity and Self-Identity: Self and Society in the Late Modern Age (CUP, 1991).
8 Michael Hayes, This is the year of AI Regulations, Cognitive World, Retrieved May, 2020 at
https://www.forbes.com/sites/cognitiveworld/2020/03/01/this-is-the-year-of-ai-regulations/#424414807a81.
9 Shamika Ravi & Puneeth Nagaraj, Harnessing the Future of AI in India, BROOKINGS (Oct. 18, 2018),
https://www.brookings.edu/research/harnessing-the-future-of-ai-in-india/.

AI systems tried to fool their human developers by “cheating” at the tasks they were programmed to do. For example, an AI tasked with generating virtual maps from real aerial images cheated by hiding data from its developers.

3.2.2. AI ethics is not enough

To prevent AI from doing things wrong (or doing the wrong things), developers must exercise greater caution and care while creating these systems. The way the AI community currently tries to achieve this is through a generally accepted set of guidelines surrounding the ethical development and use of AI. Google, for instance, pledged not to use AI for military applications after its employees openly opposed such work.

3.2.3. AI safety can only be achieved by regulating AI

Legally regulating AI can ensure that AI safety becomes an inherent part of any future AI development initiative. This means that every new AI, regardless of its simplicity or complexity, would go through a development process that inherently focuses on minimising non-compliance and the chances of failure. To ensure AI safety, regulators must consider a few must-have tenets as part of the legislation.10

10 Navin Joshi, Why Governments need to Regulate AI, Allerin (May 01, 2019),
https://www.allerin.com/blog/why-governments-need-to-regulate-ai.

However, mere use of AI is now mundane. Companies are looking to exploit AI to reinvent and accelerate their processes, value chains and business models.

As companies look to create new disruptions through Artificial Intelligence and allied fields, the role of governments in framing policy decisions and regulations becomes crucial.

Given the speed at which the technology is evolving, regulation has proved to be a tough job. Many jurisdictions around the world are trying to come up with guidelines and laws to protect the rights of their citizens and to hold companies accountable.

The truly transformative nature of the technology, combined with the nascent stage of its adoption worldwide, provides us with an opportunity to create a set of rules that better assist and promote research. The Government of India has proposed #AIforAll, which implies inclusive technology leadership, where the full potential of AI is realised in pursuance of the country’s unique needs and aspirations.

An AI regulation strategy should strive to leverage AI for economic growth, social development and inclusive growth, particularly for emerging and developing economies. While AI has the potential to provide large incremental value to a wide range of sectors, adoption to date has been driven primarily by commercial considerations.

Technology disruptions like AI are a once-in-a-generation phenomenon, and hence large-scale adoption strategies, especially national strategies, need

to strike a balance between narrow definitions of financial impact and the
greater good.

The possibilities of AI are wide-ranging: from helping doctors and scientists devise better cancer treatments, to being used as a lethal weapon or, worse, threatening humanity as a whole, as Elon Musk11 and Stephen Hawking12 have forewarned.

But what happens when an injustice is caused to an individual not by another person but by a machine, or a collection of them? With all the benefits that come bundled with AI, there are many downsides too. When harm is caused to a person, who is to be held responsible: the machine or its creator?

These are some tough questions that jurisdictions around the world have to
answer and come up with solutions for.

3.3. ISSUES IN REGULATION

3.3.1. Attribution of Liability

The challenges posed by machines and robots are to a large extent the same as those posed by humans. Under consumer protection laws, this raises the issue of attributing liability for harm caused by an AI system: for example, when a driverless car causes an accident, liability should attach for accidents that were foreseeable.

11 Supra note 1.
12 Hawking calls for AI regulation in posthumously published essays, available at
https://eandt.theiet.org/content/articles/2018/10/hawking-calls-for-ai-regulation-in-posthumously-published-
essays/, last accessed Feb 20, 2020.

3.3.2. Legal personality for AI applications

The unpredictability of AI applications is perhaps the biggest challenge in regulating AI; it stems from the unpredictability of automation. It is possible that an AI application takes decisions, or leads to outcomes, that are outside the purview of its design. Therefore, a question that has been raised is whether some types of AI application should have a legal personality distinct from that of their creator or operator. However, even if AI were to have an independent legal personality, some liabilities, especially those arising from criminal action, would need to travel back to the human beings involved in its design and deployment. Then there are issues such as the nationality to be given to a robot, if a distinct legal personality were to be defined. Sophia, the humanoid granted Saudi Arabian citizenship, comes to mind here.

3.3.3. Promoting AI

Asymmetry in current access and title to data among potential developers of AI has been identified as a key issue; this is, in a way, a competition issue. Similarly, there are issues around IP rights for the various algorithms that are developed. Such issues could potentially be addressed through IP laws, in how they would apply to the patenting of algorithms, grant inventorship and address related questions.13

While regulating AI is necessary, it should not be done in a way that stifles
the existing momentum in AI research and development. Thus, the

13 Natasha Nayak & Rajnish Gupta, Regulating Artificial Intelligence, Technology (Apr. 30, 2020),
https://www.mondaq.com/india/new-technology/914028/regulating-artificial-intelligence.

challenge will be to strike a balance between allowing enough freedom to
developers and bringing in more accountability for the makers of AI.

A key issue with technology regulation is that the horizon of this technology is so broad that we cannot come up with one-size-fits-all guidelines14. Such regulation might stifle growth and push us back on the innovation front.

AI tools can be highly complex, which means they require personnel with
deep AI, ML and data science skills. Today, cloud environments have also
become more complex, and require people who can understand the latest
trends in AI, ML and analytics in native cloud environments, in addition to
third-party tools.

AI and analytics involve a plethora of servers, storage, networking,
integration and security options, with their associated business and risk
implications. Hence, the guidelines or policy to regulate them must be
broad and flexible enough to facilitate them all.

Trusting a machine requires fairness, transparency in its working mechanism and accountability. But even researchers cannot agree on a single definition of fairness: it is always subjective, depending on the use case and on where the AI is deployed, which complicates evaluating the impact of bias. Transparency in the working of an AI mechanism, even if put in the simplest of terms, would not be understood by the masses. What the governments

14 With AI and analytics, one size does not fit all: A better and faster path to business outcomes, available at https://www.dxc.technology/analytics/insights/143677-with_ai_and_analytics_one_size_does_not_fit_all_a_better_and_faster_path_to_business_outcomes, last accessed Feb 20, 2020.

can set is the accountability of companies and programmers whose machines are found to be biased and to work against the greater good.

The AI regulation strategy must aim primarily at guiding an inevitable wave of change towards quicker and better impact. With the AI ecosystem rapidly evolving and taking societies into uncharted territory, we need to bridge the digital divide that is emerging, and data management needs to be done ethically. There has been tremendous activity concerning AI policy in different countries over the past couple of years.

AI has been hailed as revolutionary and world-changing, but it is not without drawbacks. AI-guided weapon systems can be frightening, and abusing the technology would have disastrous consequences. Job automation is viewed as the most immediate concern: as AI robots become smarter and more dexterous, the same tasks will require fewer humans. Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.15 The current debate over facial recognition software reflects this dilemma: the technology has useful applications, yet there are concerns about the purpose and safety of its usage.

3.4. LETHAL AUTONOMOUS WEAPONS SYSTEM (LAWS)

This brings us to the most glaring part: regulation of robots and automated
weapons for military use.

15 Mike Thomas, Risks of Artificial Intelligence, Artificial Intelligence (Apr. 07, 2020),
https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence.

The United Nations Interregional Crime and Justice Research Institute (UNICRI) established a center on AI and robotics with the aim to “help focus expertise on Artificial Intelligence (AI) throughout the UN in a single agency.”16 The center, situated in The Hague, Netherlands, focuses on “understanding and addressing the risks and benefits of AI and robotics from the perspective of crime and security through awareness-raising, education, exchange of information, and harmonisation of stakeholders.”17 Since its establishment in 2015, UNICRI has developed a large international network of stakeholders with whom it collaborates, including INTERPOL18, the International Telecommunications Union (ITU)19, the Institute of Electrical and Electronics Engineers (IEEE), the Foundation for Responsible Robotics and the World Economic Forum, to name a few.

Efforts to regulate AI and the military use of technological advancements are already underway and bearing fruit. The use of conventional weapons that affect civilians indiscriminately has been banned via the Convention on Certain Conventional Weapons (CCW), adopted on Oct 10, 1980 and in effect from Dec 2, 198320.

16 AI Policy – United Nations, FUTURE OF LIFE INSTITUTE, https://futureoflife.org/ai-policy-united-nations/ (last visited Feb 20, 2020).
17 Centre on Artificial Intelligence and Robotics, UNICRI, http://www.unicri.it/topics/ai_robotics/centre/ (last visited Feb 20, 2020).
18 https://artificialintelligence-news.com/2018/07/17/interpol-ai-impact-crime-policing/, last accessed Feb 20, 2020.
19 International Telecommunications Union (ITU), available at https://digitalcooperation.org/wp-content/uploads/2019/02/International-Telecommunciations-Union.pdf, last accessed Feb 20, 2020.
20 The Convention on Certain Conventional Weapons, THE UNITED NATIONS OFFICE AT GENEVA (UNOG), available at https://www.unog.ch/80256EE600585943/(httpPages)/4F0DEF093B4860B4C1257180004B1B30?OpenDocument (last visited Feb 20, 2020).

The CCW was adopted and ratified when there was not much debate on increasing automation and the dual use of technology for civilian and military purposes. Since 2013, however, the convention has also discussed LAWS, or Lethal Autonomous Weapons Systems. LAWS are proposed robotic fighters21, like nothing seen before, which can identify their targets and fire incessantly at them. While such robots do not exist yet, the technology to train AI to shoot does22.

As countries like China plan to use AI-enabled missiles23 in future military expeditions, public policy and artificial intelligence researchers have called upon the international community to ban the use of LAWS through an open letter24.

It is during such times that we need to bring in regulations and guidelines on the use of such lethal technology. Under the chairmanship of Ambassador Amandeep Singh Gill of India, a Group of Governmental Experts on LAWS met in Geneva in 201725 and 201826 to discuss regulations on LAWS.

21 Kelsey Piper, Death by algorithm: the age of killer robots is closer than you think, available at
https://www.vox.com/2019/6/21/18691459/killer-robots-lethal-autonomous-weapons-ai-war, last accessed Feb
20, 2020.
22 Ibid.
23 China eyes artificial intelligence for new cruise missiles, available at
https://www.dailystar.com.lb/News/World/2016/Aug-19/367933-china-eyes-artificial-intelligence-for-new-
cruise-missiles.ashx, last accessed Feb 20, 2020.
24 AUTONOMOUS WEAPONS: AN OPEN LETTER FROM AI & ROBOTICS RESEARCHERS, available at
https://futureoflife.org/open-letter-autonomous-weapons/, last accessed Feb 20, 2020.
25 2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), UNOG, https://www.unog.ch/80256EE600585943/(httpPages)/F027DAA4966EB9C7C12580CD0039D7B5, last accessed Feb 20, 2020.
26 2018 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), UNOG, https://www.unog.ch/80256EE600585943/(httpPages)/F027DAA4966EB9C7C12580CD0039D7B5, last accessed Feb 20, 2020.

Amnesty International urged states to ban automated weapons “before it was too late”27. The 2018 convention ended with nations emphasising the importance of retaining human control over weapons systems and the use of force, and expressing support for developing new international law on lethal autonomous weapons systems. Twenty-six states (including Austria, Brazil and Egypt) called for a total ban, and China has also called for a “new CCW protocol to prohibit the use of fully autonomous weapons systems.”

3.5. REGULATION AROUND THE WORLD

Governments in the USA, the UK, France, Japan and China have released policy and strategy papers relating to AI. In order to establish a leadership role, it is important for developing nations like India to take the plunge, starting with a draft paper to initiate the roll-out of an ambitious programme that would ensure their rightful place in this transformational era.

President Trump signed an Executive Order28 on Feb 11, 2019 to ‘Maintain America’s Leadership in AI’ by sustaining R&D investments, removing research barriers and, most importantly, promoting an international environment that is supportive of American AI innovation and its responsible use.

27 UN: Decisive action needed to ban killer robots - before it’s too late, available at
https://www.amnesty.org/en/latest/news/2018/08/un-decisive-action-needed-to-ban-killer-robots-before-its-too-
late/, last accessed Feb 20, 2020.
28 Maintaining American Leadership in Artificial Intelligence, available at
https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-
artificial-intelligence, last accessed Feb 20, 2020.

The need for AI regulation, and for proper accountability in case of mishap, is pressing because AIs have been found to be prone to biases. The data on which AI algorithms are trained makes them gender- and race-biased in many cases29. The COMPAS30 software, widely used in the US criminal justice system to predict whether an offender is likely to reoffend, has been found to be biased against people of colour and women.
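The kind of audit that surfaced the COMPAS findings can be sketched in a few lines. The Python code below is a simplified illustration with invented numbers, not the actual COMPAS data: it compares false-positive rates, i.e. how often people who did not reoffend were nevertheless flagged “high risk”, across two hypothetical groups.

```python
def false_positive_rate(scores, reoffended):
    # Among people who did NOT reoffend, what fraction were flagged high risk?
    negatives = [s for s, r in zip(scores, reoffended) if r == 0]
    return sum(negatives) / len(negatives)

# 1 = flagged high risk, 0 = flagged low risk (all figures invented)
group_1_scores   = [1, 1, 1, 0, 0, 0]
group_1_reoffend = [1, 0, 0, 0, 0, 0]   # two of five non-reoffenders flagged
group_2_scores   = [1, 0, 0, 0, 0, 0]
group_2_reoffend = [1, 0, 0, 0, 0, 0]   # zero of five non-reoffenders flagged

fpr_1 = false_positive_rate(group_1_scores, group_1_reoffend)  # 0.4
fpr_2 = false_positive_rate(group_2_scores, group_2_reoffend)  # 0.0
print(f"group 1 FPR: {fpr_1:.1f}, group 2 FPR: {fpr_2:.1f}")
```

A gap of this kind between groups is precisely the disparity a regulator or auditor would ask a vendor to explain; the ProPublica analysis of COMPAS reported an analogous gap in false-positive rates between Black and white defendants.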

The worst-case scenario31 envisioned by Elon Musk is that the technology becomes so advanced, and so in control of firepower, that it threatens human existence. Oren Etzioni, however, counters32 that AI will never be a threat to humanity and calls for a human-robot partnership.

The US Congress recently introduced the Algorithmic Accountability Act of 201933 to “direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments”, i.e. to make companies liable for any bias in their algorithms. The companies that write the code are held accountable, as they are the ones deemed responsible for coding and giving life to a machine.

29 Racial Bias and Gender Bias Examples in AI systems, available at https://medium.com/thoughts-and-
reflections/racial-bias-and-gender-bias-examples-in-ai-systems-7211e4c166a1, last accessed Feb 20, 2020.
30 Machine Bias, available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-
sentencing, last accessed Oct 20, 2019.
31 Tim Higgins, Elon Musk Lays Out Worst-Case Scenario for AI Threat, available at
https://www.wsj.com/articles/elon-musk-warns-nations-governors-of-looming-ai-threat-calls-for-regulations-
1500154345, last accessed Feb 20, 2020.
32 Oren Etzioni, No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity, available at
https://www.technologyreview.com/s/602410/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-
humanity/, last accessed Feb 20, 2020.
33 Algorithmic Accountability Act of 2019, available at https://www.congress.gov/bill/116th-congress/house-
bill/2231/all-info, last accessed Feb 20, 2020.

3.6. RISKS OF REGULATION

Experts worry that over-regulation by legislatures may kill the innovation of scientists and coders. Mistakes are the basis of any innovation.

In fact, the process of learning anything new involves committing
mistakes. You have a hypothesis, you experiment, you conduct tests, you
fail, you learn and then finally after learning from your mistakes, you
succeed.

The same applies to any scientific innovation: scientists innovate and create new products. However, if they are regulated, and coders are threatened with sanctions whenever their products are found to be faulty or biased, innovation suffers. There are times when coders cannot even predict a fault or inherent bias that might develop in the code they write; the fault is magnified only when the algorithm is used on a wide user base, which is generally the public.

So, if coders and IT experts are threatened with regulations, they might not innovate or create new products. This would drive young researchers towards self-censorship and certainly stifle innovation.

A similar conundrum was faced when the internet was in its nascent stage. At that time, the US avoided34 regulating it heavily, because doing so would have stunted the growth of early businesses, and the internet as we know it might never have emerged.

34Adam Thierer, 15 Years On, President Clinton's 5 Principles for Internet Policy Remain the Perfect Paradigm,
available at https://www.forbes.com/sites/adamthierer/2012/02/12/15-years-on-president-clintons-5-principles-
for-internet-policy-remain-the-perfect-paradigm/, last accessed Feb 20, 2020.

Artificial intelligence systems have the potential to change how humans do
just about everything. Scientists, engineers, programmers and
entrepreneurs need time to develop the technologies – and deliver their
benefits. Their work should be free from concern that some AIs might be
banned, and from the delays and costs associated with new AI-specific
regulations.

