Published by Sthita Patnaik, 2019-11-15 12:56:21

AI & Law - MOD3


MODULE 3
REGULATION OF ARTIFICIAL INTELLIGENCE

“I have exposure to the very cutting-edge AI, and I think people should be really
concerned about it. I keep sounding the alarm bell, but until people see robots
going down the street killing people, they don’t know how to react, because it
seems so ethereal.”1 – Elon Musk

INTRODUCTION

As discussed in the previous chapters, AI is at the helm of a revolution today.
AI is present everywhere – in your phone, smart watch2, vehicle3, even your
doorbell4. AI-enabled devices are the new normal. There is now even a debate on
sexual interaction with robots. Passive, doll-like sex dolls are a thing of the
past: developers have been adding automated movements to sex dolls to provide
more intimacy and pleasure to consumers. Advances in AI are driven by the quest
to overcome modern problems, and scientists are optimistic that inventions like
sex-bots5 will help people overcome their loneliness.

1 https://www.theverge.com/2017/7/17/15980954/elon-musk-ai-regulation-existential-
threat, last accessed Oct 20, 2019.
2 https://eandt.theiet.org/content/articles/2017/09/smart-watches-equipped-with-
advanced-ai-will-monitor-your-every-move/, last accessed Oct 20, 2019.
3 https://builtin.com/artificial-intelligence/artificial-intelligence-automotive-industry, last
updated Oct 15, 2019; last accessed Oct 20, 2019.
4 https://doorbellexpert.com/face-recognition-doorbell-guide/, last accessed Oct 20,
2019.
5 https://www.dailymail.co.uk/femail/article-3661804/Married-Japanese-man-claims-
finally-love-sex-doll.html, last accessed Oct 20, 2019.

Artificial Intelligence is witnessing two major revolutions simultaneously:
one among developers and the other in enterprises. The revolutions we are
witnessing today are set to drive technology decisions for at least the next
decade. Developers around the globe are embracing AI on a massive scale.

Many platform companies, such as Microsoft, Google AI and Palantir, are focused
on enabling developers to make the shift to the next app-development pattern,
driven by the intelligent cloud and the intelligent edge. AI is the runtime
that will power the apps of the future.

At the same time, tech and other enterprises are keen to adopt and
assimilate AI6. AI is changing how MNCs serve their customers, run
operations and innovate. As a result, every business process in every
industry will be redefined in profound ways. There used to be a saying that
“software is eating the world”7; now, it is true to say that “AI is eating
software”.

NEED FOR REGULATION

However, the mere use of AI is now mundane. Companies are looking to
exploit AI to reinvent and accelerate their processes, value chains
and business models.

As companies look towards creating new disruptions through Artificial
Intelligence and allied fields, the role of governments in formulating policy
decisions and regulations becomes crucial.

6 https://content.techgig.com/270-increase-in-enterprise-ai-adoption-over-the-past-4-years-gartner/articleshow/67658771.cms, last accessed Oct 20, 2019.
7 https://a16z.com/2011/08/20/why-software-is-eating-the-world/, last accessed Oct 20,
2019.

With the speed at which technology is evolving, regulation has proved to be a
tough job. Many jurisdictions around the world are trying to come up with
guidelines and laws to protect the rights of their citizens and to hold
companies accountable.

The truly transformative nature of the technology, combined with the nascent
stage of its adoption worldwide, provides us with an opportunity to create a set
of rules that better assist and promote research. The Government of India has
proposed #AIforAll, which implies inclusive technology leadership, where the
full potential of AI is realized in pursuance of the country’s unique needs and
aspirations.

An AI regulation strategy should strive to leverage AI for economic growth,
social development and inclusive growth in emerging and developing economies.
While AI has the potential to provide large incremental value to a wide range of
sectors, adoption to date has been driven primarily by a commercial
perspective.

Technology disruptions like AI are a once-in-a-generation phenomenon, and
hence large-scale adoption strategies, especially national strategies, need to
strike a balance between narrow definitions of financial impact and the greater
good.

The possibilities with AI range from helping doctors and scientists develop
better cancer treatments to being used as a lethal weapon or, worse,
threatening humanity as a whole, as Elon Musk8 and Stephen Hawking9
have forewarned.

8 https://www.theverge.com/2017/7/17/15980954/elon-musk-ai-regulation-existential-
threat, last accessed Oct 20, 2019.
9 https://eandt.theiet.org/content/articles/2018/10/hawking-calls-for-ai-regulation-in-
posthumously-published-essays/, last accessed Oct 21, 2019.

But what happens when an injustice is caused to an individual not by another
person but by a machine, or a collection of them? With all the benefits that
come bundled with AI, there are many downsides too. In cases where harm is
caused to a person, who is to be held responsible: the machine or its creator?

These are some tough questions that jurisdictions around the world have to
answer and come up with solutions for.

ISSUES IN REGULATION

A key issue with technology regulation is that the horizon of this technology is
so broad that we cannot come up with one-size-fits-all guidelines10. Such a
regulation might stifle growth and push us back on the innovation front.

AI tools can be highly complex, which means they require personnel with deep
AI, ML and data science skills. Today, cloud environments have also become
more complex, and require people who can understand the latest trends in AI,
ML and analytics in native cloud environments, in addition to third-party tools.

AI and analytics involve a plethora of servers, storage, networking, integration
and security options, with their associated business and risk implications. Hence,
the guidelines or policy to regulate them must be broad and flexible enough to
facilitate them all.

Trusting a machine requires fairness and transparency in its working mechanism,
as well as accountability. But even researchers cannot agree on a single
definition of fairness: it is always subjective, depending on the use case and
on where the AI is deployed, when evaluating the impact of bias. Transparency in
the working of an AI mechanism, even if put in the simplest of terms, would not
be understood by the
10 https://www.dxc.technology/analytics/insights/143677-
with_ai_and_analytics_one_size_does_not_fit_all_a_better_and_faster_path_to_busines
s_outcomes, last accessed Oct 20, 2019.

masses. What governments can set is the accountability of companies and
programmers whose machines are found to be biased and against the greater
good.
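The claim that fairness has no single definition can be made concrete with a small sketch. The groups, numbers and function names below are hypothetical, invented only to show that two widely cited fairness criteria can disagree on the very same predictions:

```python
# Toy sketch: two common fairness criteria applied to the same predictions.
# All groups, predictions and labels below are hypothetical.

def selection_rate(preds):
    """Fraction of people the model approves, regardless of ground truth."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of genuinely qualified people the model approves."""
    qualified = [p for p, y in zip(preds, labels) if y == 1]
    return sum(qualified) / len(qualified)

# Hypothetical model outputs (1 = approve) and true labels (1 = qualified).
group_a_preds, group_a_labels = [1, 1, 0, 0], [1, 1, 0, 0]
group_b_preds, group_b_labels = [1, 1, 0, 0], [1, 0, 1, 0]

# Demographic parity: are selection rates equal across groups?
dp_gap = abs(selection_rate(group_a_preds) - selection_rate(group_b_preds))

# Equal opportunity: are true-positive rates equal across groups?
eo_gap = abs(true_positive_rate(group_a_preds, group_a_labels)
             - true_positive_rate(group_b_preds, group_b_labels))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.00 -> "fair" by one test
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.50 -> "unfair" by another
```

Here the model satisfies demographic parity (both groups are approved at the same rate) while violating equal opportunity (qualified members of one group are approved far less often), so a regulator would first have to choose which notion of fairness it intends to enforce.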

The AI regulation strategy must be aimed primarily at guiding an inevitable wave
of change for quicker and better impact, with the AI ecosystem rapidly evolving
and taking societies into uncharted territory. We need to bridge the digital
divide that is emerging, and data management needs to be done ethically. There
has been tremendous activity concerning AI policy in different countries over
the past couple of years.

LETHAL AUTONOMOUS WEAPONS SYSTEM

This brings us to the most glaring part: regulation of robots and automated
weapons for military use.

The United Nations Interregional Crime and Justice Research Institute (UNICRI)
established a centre on AI and robotics with the aim to “help focus expertise on
Artificial Intelligence (AI) throughout the UN in a single agency.”11 The centre
is situated in The Hague, Netherlands, and focuses on “understanding and
addressing the risks and benefits of AI and robotics from the perspective of
crime and security through awareness-raising, education, exchange of
information, and harmonisation of stakeholders.”12 Since its establishment in
2015, UNICRI has developed a large international network of stakeholders with
whom it collaborates, including INTERPOL13, the International

11 AI Policy – United Nations, FUTURE OF LIFE INSTITUTE, https://futureoflife.org/ai-
policy-united-nations/ (last visited Oct 20, 2019)
12 Centre on Artificial Intelligence and Robotics, UNICRI,
http://www.unicri.it/topics/ai_robotics/centre/ (last visited Oct 20, 2019)
13 https://artificialintelligence-news.com/2018/07/17/interpol-ai-impact-crime-policing/, last accessed Oct 20, 2019.

Telecommunications Union (ITU)14, the Institute of Electrical and Electronics
Engineers (IEEE), the Foundation for Responsible Robotics and the World Economic
Forum, to name a few.

Efforts to regulate AI and the military use of technological advancements are
already underway and bearing fruit. The use of conventional weapons that affect
civilians indiscriminately has been banned via the Convention on Certain
Conventional Weapons, adopted on Oct 10, 1980 and in effect from Dec 2,
198315.

The CCW was adopted and ratified when there was not much debate on increasing
automation and the dual use of technology for civilian and military purposes.
Since 2013, however, the convention has also discussed Lethal Autonomous
Weapons Systems, or LAWS. LAWS are proposed robot fighters16, like never before,
which can identify their targets and fire at them incessantly. While such robots
do not exist yet, the technology to train AI to shoot does17.

As countries like China plan to use AI-enabled missiles18 in their future
military endeavours, public policy and Artificial Intelligence researchers have
called upon the international community to place a ban on the use of LAWS
through an open letter19.

14 https://digitalcooperation.org/wp-content/uploads/2019/02/International-Telecommunciations-Union.pdf, last accessed Oct 20, 2019.
15 The Convention on Certain Conventional Weapons, THE UNITED NATIONS OFFICE AT GENEVA (UNOG), https://www.unog.ch/80256EE600585943/(httpPages)/4F0DEF093B4860B4C1257180004B1B30?OpenDocument (last visited Oct 20, 2019).
16 https://www.vox.com/2019/6/21/18691459/killer-robots-lethal-autonomous-weapons-ai-war, last accessed Oct 20, 2019.
17 Ibid.
18 https://www.dailystar.com.lb/News/World/2016/Aug-19/367933-china-eyes-artificial-intelligence-for-new-cruise-missiles.ashx, last accessed Oct 20, 2019.
19 https://futureoflife.org/open-letter-autonomous-weapons/, last accessed Oct 20, 2019.

It is during such times that we need to bring in regulations and guidelines on
the use of such lethal technology. Under the chairmanship of Ambassador
Amandeep Singh Gill of India, a Group of Governmental Experts on LAWS met in
Geneva in 201720 and 201821 to discuss regulations on LAWS.

Amnesty International has called on states to ban automated weapons “before it
was too late”22. The 2018 convention ended with nations emphasizing “the
importance of retaining human control over weapons systems and the use of
force” and expressing support for developing new international law on lethal
autonomous weapons systems. Twenty-six of these states (including Austria,
Brazil and Egypt) called for a total ban, and China has also called for a “new
CCW protocol to prohibit the use of fully autonomous weapons systems.”

REGULATION AROUND THE WORLD

Governments in the USA, UK, France, Japan and China have released their policy
and strategy papers relating to AI. In order to establish a leadership role, it
is important for developing nations like India to take the plunge, starting by
releasing a draft paper to initiate the roll-out of an ambitious programme that
would ensure their rightful place in this transformational era.

20 2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), UNOG, https://www.unog.ch/80256EE600585943/(httpPages)/F027DAA4966EB9C7C12580CD0039D7B5, last visited Oct 20, 2019.
21 2018 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), UNOG, https://www.unog.ch/80256EE600585943/(httpPages)/F027DAA4966EB9C7C12580CD0039D7B5, last accessed Oct 20, 2019.

22 https://www.amnesty.org/en/latest/news/2018/08/un-decisive-action-needed-to-ban-killer-robots-before-its-too-late/, last accessed Oct 20, 2019.

President Trump signed an Executive Order23 on Feb 11, 2019 to ‘Maintain
America’s Leadership in AI’ by sustaining R&D investments, removing research
barriers and, most importantly, promoting an international environment that is
supportive of American AI innovation and its responsible use.

The need for AI regulation, and for proper accountability in case of a mishap,
is pressing because AIs have been found to be prone to biases. The data that AI
algorithms are trained on makes them gender- and racially biased in many
cases24. The COMPAS25 software, widely used in the US criminal justice system to
predict whether a criminal is likely to repeat his offence, has been found to be
biased against people of color and women.
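The kind of analysis behind findings like the one on COMPAS can be sketched in a few lines. The audit below compares false-positive rates across two groups; the variable names and numbers are invented for illustration and bear no relation to the real COMPAS data:

```python
# Illustrative disparity audit in the style of the ProPublica COMPAS study:
# compare false-positive rates (people wrongly flagged as likely to
# reoffend) across two groups. All numbers here are invented.

def false_positive_rate(flagged, reoffended):
    """Share of people who did NOT reoffend yet were flagged high-risk."""
    non_reoffenders = [f for f, r in zip(flagged, reoffended) if r == 0]
    return sum(non_reoffenders) / len(non_reoffenders)

# Hypothetical risk-tool outputs (1 = flagged high-risk) and later outcomes.
group_a_flagged    = [1, 1, 0, 0, 1, 0]
group_a_reoffended = [1, 0, 0, 0, 1, 1]
group_b_flagged    = [1, 0, 0, 0, 0, 0]
group_b_reoffended = [1, 0, 0, 0, 0, 1]

fpr_a = false_positive_rate(group_a_flagged, group_a_reoffended)
fpr_b = false_positive_rate(group_b_flagged, group_b_reoffended)

print(f"group A false-positive rate: {fpr_a:.2f}")  # 0.33
print(f"group B false-positive rate: {fpr_b:.2f}")  # 0.00
```

Audits of this sort examine error rates per group rather than a single aggregate accuracy figure, because an aggregate score can mask exactly this kind of disparity.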

The worst-case scenario26, according to Elon Musk, is that the technology
becomes so advanced, and so in control of firepower, that it threatens human
existence. Oren Etzioni, however, counters27 that AI will never be a threat to
humanity and calls for a human-robot partnership.

The US Congress recently introduced the Algorithmic Accountability Act of 201928
to “direct the Federal Trade Commission to require entities that use, store, or
share personal information to conduct automated decision system impact
assessments and data protection impact assessments”, i.e. to make the

23 https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-
american-leadership-in-artificial-intelligence, last accessed Oct 20, 2019.
24 https://medium.com/thoughts-and-reflections/racial-bias-and-gender-bias-examples-
in-ai-systems-7211e4c166a1, last accessed Oct 20, 2019.
25 https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-
sentencing, last accessed Oct 20, 2019.
26 https://www.wsj.com/articles/elon-musk-warns-nations-governors-of-looming-ai-
threat-calls-for-regulations-1500154345, last accessed Oct 20, 2019.
27 https://www.technologyreview.com/s/602410/no-the-experts-dont-think-
superintelligent-ai-is-a-threat-to-humanity/, last accessed Oct 20, 2019.
28 https://www.congress.gov/bill/116th-congress/house-bill/2231/all-info, last accessed Oct 27, 2019.

companies liable for any bias in their algorithms. The companies that write the
code are held accountable, as they are the ones deemed responsible for coding
and giving life to a machine.

FEARS OF REGULATION

Experts are worried that over-regulation by legislatures may kill the innovation
of scientists and coders. The basis of any innovation is mistakes.

In fact, the process of learning anything new involves making mistakes. You
have a hypothesis, you experiment, you conduct tests, you fail, you learn, and
finally, after learning from your mistakes, you succeed.

The same applies to any scientific innovation. Scientists innovate and create
new products. Under regulation, however, coders are threatened with sanctions if
their products are found to be faulty or biased. There are times when coders
cannot even predict a fault or inherent bias in the code they write; the fault
is magnified only when the algorithm is used on a wide user base, which is the
general public.

So, if coders and IT experts are threatened with regulations, they might not
innovate or create new products. This would lead young researchers to
self-regulate and would certainly stifle innovation.

A similar conundrum was faced when the internet was in its nascent stage. At the
time, the US avoided29 regulating it heavily, because that would have stunted
the growth of early businesses, and the internet as we know it might never have
emerged.
Artificial intelligence systems have the potential to change how humans do just
about everything. Scientists, engineers, programmers and entrepreneurs need
time to develop the technologies – and deliver their benefits. Their work should
be free from concern that some AIs might be banned, and from the delays and
costs associated with new AI-specific regulations.

29 https://www.forbes.com/sites/adamthierer/2012/02/12/15-years-on-president-clintons-5-principles-for-internet-policy-remain-the-perfect-paradigm/, last accessed Oct 26, 2019.

