MODULE 1
ARTIFICIAL INTELLIGENCE - INTRODUCTION
In contemporary times, Artificial Intelligence is no longer merely an advancing technology; its capacity to perform complex activities and to learn from experience has made it one of the most transformative fields of our age. Today, Artificial Intelligence finds use in a plethora of settings, from consumer appliances and voice assistants to healthcare and autonomous vehicles. In fact, we are becoming increasingly dependent on systems operating with Artificial Intelligence for the maintenance and functioning of our physical and digital infrastructure. The most important thing to keep in mind, however, is that the law should advance along with the advancement of technology: it must be updated so that it can account for these technological developments. Artificial Intelligence, with its ability to learn and to operate autonomously from humans, poses significant challenges to established areas of law.1
The Artificial Intelligence revolution is widespread throughout society. From smart air conditioners that decide the optimum temperature of a room, to human-computer interaction systems that process natural language between humans and computers and thereby raise new questions for the free speech doctrine, the world has enmeshed itself in a wide chain of networks, and with such extensive networking even the internet has begun to depend on Artificial Intelligence to function.
1 Woodrow Barfield, “Research Handbook on the Law of Artificial Intelligence”, p. 1.
Software based on Artificial Intelligence is also extensively used for selecting content based on user preferences, showing advertisements2 targeted at a particular class of people, regulating news feeds on social media, and so on. Remember the times when you saw advertisements for a product across various platforms after a single Google search? Thank Artificial Intelligence in that case. The other side of the coin is that such use of Artificial Intelligence, and its integration into smart machines that may at times behave unpredictably, poses significant challenges to established areas of law, because the law has always relied on humans being in the decision-making loop.
The Merriam-Webster dictionary defines intelligence as “the ability to learn or understand things or to deal with new or difficult situations.” Generally, the objective of Artificial Intelligence is to create computers, software, and machines that are capable of intelligent and, in some cases, unpredicted and creative behavior. The Organisation for Economic Co-operation and Development (OECD) has defined an ‘Artificial Intelligence system’ in its AI Principles. The definition states that an “Artificial Intelligence system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”
Artificial Intelligence systems are designed to operate with varying levels of autonomy. The basic ingredients of Artificial Intelligence are algorithms. In plain words, an algorithm can be described as a procedure for solving a problem in a finite number of steps or, as stated by Microsoft’s Tarleton Gillespie, algorithms are “encoded procedures for transforming input data into a determined output, based on specified calculations.”3
2 Stephen F. Deangelis, “Artificial Intelligence: How Algorithms Make Systems Smart”, available at: https://www.wired.com/insights/2014/09/artificial-intelligence-algorithms-2/.
THE ISSUE OF LIABILITY
The use of Artificial Intelligence to create the range of behaviors shown by emerging
smart technologies is core to the discussion of Artificial Intelligence as a
transformative and disruptive technology.4 The way in which Artificial Intelligence
solves a problem can be completely new to humans, and for this reason the liability arising from the acts of Artificial Intelligence raises several questions in different areas of law.
Consider a computer system which functions on algorithms and is controlled by Artificial Intelligence, whose activities cannot be predicted by humans. In such a situation, with whom shall the liability rest if the Artificial Intelligence causes harm to a human or damage to property? With the Artificial Intelligence which directed the actions of the machine that caused the damage but lacks personhood status, or with the human being who lacks knowledge of how the machine performed or of whether the machine was actually trying to solve a particular problem?
Another aspect of Artificial Intelligence can also be witnessed in the grant of Intellectual Property Rights, where the granting of such rights to Artificial Intelligence is still debatable. Can copyright be granted to an Artificial Intelligence for authorship of a work created by an algorithm? And what of a patent, where the invention was independently created by an algorithm that was itself derived from machine learning techniques?
3 Tarleton Gillespie, “The Relevance of Algorithms”, available at: https://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262525374.001.0001/upso-9780262525374-chapter-9/.
4 Supra note 1.
Machines that lack the ability to think beyond a few simple rules directing their actions are, in this sense, devoid of intelligence. In the case of Comptroller of the Treasury v. Family Entertainment Centers5, the Court of Special Appeals considered whether life-sized animatronic puppets that danced and sang at a Chuck E. Cheese restaurant triggered a state tax on establishments serving food “where there is a furnished performance.” The Court held that a pre-programmed robot can perform menial tasks but, because a pre-programmed robot has no ‘skill’, it cannot ‘perform’ a piece of music.6
However, if we have robots with the capability to sense the environment, process what they sense, and carry out a particular action on the basis of the algorithms on which they function, the result may be solutions unknown to humans. We can therefore infer that the more human-like Artificial Intelligence becomes, the more the law is challenged.7 The basic question that arises is who should be held liable in circumstances where Artificial Intelligence solves a problem in a way that is completely unknown to the humans in the system. Should the person who created the algorithm or who made the machine be held liable, and should humans be liable at all when these artificially intelligent systems can write their own algorithms and solve problems with solutions unknown to humans? Hence, in the coming times there may be significant disputes of law and policy over how to assign liability, but there has to be a body of law to guide the Courts in deciding such disputes and, in particular, in allocating liability between humans and artificially intelligent machines.
5 519 A.2d 1337, 1338 (Md. 1987).
6 Ibid.
7 Supra note 1 at p 5.
In cases where harm is alleged to have been caused by Artificial Intelligence, the Court is often asked to investigate the novel technology and apply precedents in the form of case law, many of which do not even fit the circumstances, in order to make determinations on liability. For example, common-law tort and malpractice claims often center on the very human concepts of fault, negligence, knowledge, intent, and reasonableness.8
The main question which demands our attention now is how liability can be assigned when human judgement is replaced by an artificial intelligence. In United States v. Athlone Indus., Inc.,9 the court ruled that “robots cannot be sued”, and instead discussed how the manufacturer of a defective robotic machine is liable for civil penalties for the machine’s defects. However, it is also necessary to keep in mind that robots and artificial intelligence have become far more sophisticated and autonomous since this judgment.
In a developing country like India, Artificial Intelligence holds out the promise of new breakthroughs in medical research, and Big Data generates more calibrated searches and allows quicker detection of crimes.10 The use of Artificial Intelligence in the health industry in India is well documented. The Manipal Hospital Group has partnered with IBM’s Watson for Oncology for the diagnosis and treatment of seven types of cancer, while in the context of pharmaceuticals, Artificial Intelligence software is being used to scan all available academic literature for tasks such as
8 John Frank Weaver, “We Need to Pass Legislation on Artificial Intelligence Early and Often”, Slate: Future Tense (2014), available at: https://www.slate.com/blogs/future_tense/2014/09/12/we_need_to_pass_artificial_intelligence_laws_early_and_often.
9 746 F.2d 977 (3d Cir. 1984).
10 Rohan George, “Predictive Policing: What is it, How it works, and its Legal Implications”, The Centre for Internet and Society, India, available at: https://cis-india.org/internet-governance/blog/predictive-policing-what-is-it-how-it-works-and-it-legal-implications.
molecule discovery.11 Needless to say, the amount of data gathered and its analysis will have immense benefits for citizens.
DATA PROTECTION AND ARTIFICIAL INTELLIGENCE
The development of Artificial Intelligence has been rapid in recent years, and at present Artificial Intelligence is being used as a tool in both private and public sector organizations around the globe. Along with these technological innovations and their implementation come other important considerations, one of them being the tension between AI and Data Protection.
A long-term approach is needed to assess these challenges and to examine whether the current approach to Data Protection has become outdated and ineffective. Therefore, with the technological advancements in AI, we have both an opportunity and an obligation to examine whether current data protection laws and practices protect privacy effectively in an era of AI.12
While some scholars have argued that AI poses a threat to data protection, others feel that AI can offer opportunities to further strengthen it. For example, AI can help companies limit or monitor who is looking at an individual’s data and respond in real time to prevent inappropriate use or theft of data. Companies are also in the process of developing ‘privacy bots’, which remember a user’s privacy preferences and try to apply them consistently across various sites.
11 Artificial Intelligence in the Healthcare Industry in India, The Centre for Internet and Society, India, available at: https://cis-india.org/internet-governance/files/ai-and-healtchare-report.
12 Fred H. Cate & Rachael Dockery, Artificial Intelligence and Data Protection: Observations on a Growing Conflict, p.2.
‘Polisis’, which stands for “privacy policy analysis”, is an AI that uses machine learning
to “read a privacy policy it’s never seen before and extract a readable summary,
displayed in a graphical flow chart, of what kind of data a service/company collects,
the sharing of that data and whether a user can opt out of that collection or
sharing.”13
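As a rough illustration of the kind of task described above, the sketch below is a hypothetical toy, not the actual Polisis system (which uses trained machine-learning models rather than keyword rules). It tags each sentence of a sample policy with simple keyword rules and prints a summary of collection, sharing, and opt-out statements; the policy text and categories are invented for the example.

```python
# Hypothetical toy sketch of privacy-policy summarisation; the real Polisis
# system uses machine-learning classifiers, not keyword matching.
POLICY = """We collect your email address and location data.
We share aggregated data with advertising partners.
You may opt out of marketing emails at any time."""

CATEGORIES = {
    "collection": ["collect", "gather"],
    "sharing": ["share", "disclose", "partners"],
    "opt-out": ["opt out", "unsubscribe"],
}

def summarise(policy_text):
    """Assign each sentence to the categories whose keywords it mentions."""
    summary = {category: [] for category in CATEGORIES}
    for sentence in policy_text.splitlines():
        for category, keywords in CATEGORIES.items():
            if any(keyword in sentence.lower() for keyword in keywords):
                summary[category].append(sentence.strip())
    return summary

for category, sentences in summarise(POLICY).items():
    print(category, "->", sentences)
```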
AI is also being used to caution users against suspicious websites, advertisements and other malicious activity. Hence, AI is helping companies develop technologies that are more protective of users’ privacy.
While data protection laws and regulations attempt to protect sensitive data and similar variables, AI algorithms may need to include such data in their analysis to ensure accurate and fair results. For example, when predicting the likelihood of death in pneumonia patients, researchers at Microsoft discovered that a history of asthma appeared to result in a lower risk of death, likely because these individuals tend to seek treatment earlier. Because those protected variables were left in the model, it was easier for researchers to identify and account for this effect.14
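A minimal, entirely hypothetical illustration of why keeping such a variable in the analysis matters: with the variable present, the counterintuitive pattern is visible and can be examined; with it removed, the same bias would silently shape the predictions. The patient figures below are invented for the sketch.

```python
# Hypothetical patient records: (has_asthma, died). The numbers are invented
# purely to illustrate the pattern described in the text, in which asthma
# patients show a *lower* observed mortality because they receive earlier,
# more aggressive care.
records = [(True, False)] * 95 + [(True, True)] * 5 \
        + [(False, False)] * 85 + [(False, True)] * 15

def mortality_rate(records, asthma):
    """Observed death rate among patients with or without asthma."""
    group = [died for has_asthma, died in records if has_asthma == asthma]
    return sum(group) / len(group)

print("asthma:   ", mortality_rate(records, True))   # 0.05
print("no asthma:", mortality_rate(records, False))  # 0.15
# Because the asthma variable is kept in the data, a reviewer can spot this
# counterintuitive correlation and investigate the confounder (earlier
# treatment) instead of letting a model silently learn "asthma = low risk".
```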
Most Data Protection laws require that there be a lawful basis for collecting data. Under the General Data Protection Regulation (GDPR), the lawful bases for collecting and sharing personal data are consent, contractual performance, legal obligation, protection of vital interests, public interest, or legitimate interests.15 The fundamental question that arises here is how organizations can give data protection authorities confidence that they have established a lawful basis for processing while still allowing flexibility to AI models. As the Norwegian Data Protection Authority
13 Greenberg A., “An AI that reads Privacy Policies so that you don’t have to”, available at
https://www.wired.com/story/polisis-ai-reads-privacy-policies-so-you-dont-have-to/.
14 Supra note 12 at p. 30.
15 Art. 6(1), General Data Protection Regulation.
explained, “Most applications of artificial intelligence require huge volumes of data
in order to learn and make intelligent decisions.”16
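One possible way of documenting this, shown below as a hypothetical sketch only and not a compliance recipe, is for an organization to record the asserted Article 6(1) basis alongside every dataset before it is handed to a training pipeline, so that the justification is auditable later. The register contents and dataset names are invented.

```python
from enum import Enum

# The six lawful bases listed in Art. 6(1) GDPR.
class LawfulBasis(Enum):
    CONSENT = "consent"
    CONTRACT = "contractual performance"
    LEGAL_OBLIGATION = "legal obligation"
    VITAL_INTERESTS = "vital interests"
    PUBLIC_INTEREST = "public interest"
    LEGITIMATE_INTERESTS = "legitimate interests"

# Hypothetical processing register: dataset name -> recorded basis.
processing_register = {
    "customer_support_chats": LawfulBasis.CONSENT,
}

def release_for_training(dataset_name):
    """Refuse to hand a dataset to the AI pipeline unless a lawful basis
    has been recorded for it in the register."""
    basis = processing_register.get(dataset_name)
    if basis is None:
        raise PermissionError(f"No lawful basis recorded for {dataset_name!r}")
    print(f"Releasing {dataset_name!r}; recorded basis: {basis.value}")

release_for_training("customer_support_chats")   # allowed
# release_for_training("web_scrape_2020")        # would raise PermissionError
```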
A lot of regulators, attorneys, businesses and academics are working to find a middle way to address the challenges posed by Artificial Intelligence to Data Protection. However, the tension between Artificial Intelligence and Data Protection is so great and so fundamental that efforts to reconcile them risk either weakening data protection or interfering with the benefits of AI. Neither result is desirable, given the importance of AI and that of privacy.
TOWARDS A LAW OF ARTIFICIAL INTELLIGENCE
To begin with, as a policy for artificial intelligence, the goal of government regulators should be to draft legislation that does not restrict research on AI, but still protects the public from possible dangers as artificial intelligence approaches, and then exceeds, human levels of intelligence. For “smart machines” and other emerging technologies which are controlled by artificial intelligence and which perform tasks requiring computationally intensive algorithms and sophisticated analytical techniques, current legal doctrines are being challenged as these systems engage in creative and unpredictable behavior.17
The technologies driving the artificial intelligence revolution and posing challenges to current legal doctrine are the analytical techniques and algorithms which give machines the capability to go beyond their original programming and to operate autonomously from humans.18
16 Datatilsynet (Norwegian Data Protection Authority), Artificial Intelligence and Privacy, available at
https://www.datatilsynet.no/globalassests/global/english/ai-and-privacy.pdf.
17 Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Cal. L. Rev., (2015).
18 Ethem Alpaydin, Machine Learning: The New AI (MIT Press 2016).
Hence, the “techniques” of artificial intelligence, that is, the algorithms and sophisticated analytical methods, should be the focus of the law on artificial intelligence, as opposed to the manifestation of artificial intelligence in a particular physical form. The primary reason is that artificial intelligence does not need a body either to exist or to act in the physical world. Artificial intelligence also controls digital entities, and therefore a law which focuses only on “smart machines” will never adequately cover the full range of technologies controlled by artificial intelligence that are entering society.
The ability of artificial intelligence to act autonomously from humans, to engage in creative problem solving and, as noted above, to exist either as a physical or a digital entity raises specific questions of law. For example, if anyone today creates a “virtual avatar” which operates autonomously, will that avatar qualify to act as their agent? Would product liability law apply to algorithms and software?
Assigning liability when an artificially intelligent entity harms a human or damages property is, of course, an important issue. Some professors have rightly observed that complex autonomous systems will present a problem for classic fault-based legal schemes such as the law of torts, because intelligent systems have the potential to behave in unpredictable ways.19
How can people who build and deploy automated and intelligent systems be said to be at fault when they could not have reasonably anticipated the behaviour, and thus the risk, of an automated intelligent system? Given the lack of ‘legal person’ status
19 William D. Smart, Cindy M. Grimm, and Woodrow Hartzog, An Education Theory of Fault for Autonomous Systems,
Yale University Press (2017).
for artificial intelligence, the principle of strict liability, which holds producers liable for harm regardless of fault, might be an approach worth considering.20
The White House in the U.S. released a report21 on the future of artificial intelligence which offers several recommendations on how to regulate this technology. It notes that the data used to train artificial intelligence may influence what it learns and how it responds; hence, federal agencies should prioritise the creation of open training data and open data standards in artificial intelligence. A potential step in this area could be an “Open Data for AI” initiative with the objective of releasing a significant number of government datasets, in addition to the publicly available datasets released by some of the tech giants, such as Taskmaster-122 by Google and the self-driving datasets released by Lyft and Waymo23, among others, to accelerate artificial intelligence research and to encourage the use of open data standards and best practices across government, academia, and the private sector. Secondly, government agencies should draw on appropriate technical expertise when setting regulatory policy for artificially intelligent systems. The nature of artificial intelligence techniques is very complex, and domain expertise will therefore be crucial for informing legislators of the scope and capabilities of artificial intelligence.
20 Michael Guihot, Anne Matthew, and Nicolas Suzor, Nudging Robots: Innovative Solutions to Regulate Artificial
Intelligence, (2017).
21 Artificial Intelligence, Automation, and the Economy, Executive Office of the President, 2016, available at:
https://www.whitehouse.gov/sites/whitehouse.gov/files/images/EMBARGOED%20AI%20Economy%20Report.pdf.
22 https://research.google/pubs/pub48484/
23 After Lyft, Waymo Open Sources Self-Driving Dataset To The Public, available at: https://analyticsindiamag.com/after-lyft-waymo-open-sources-self-driving-dataset-to-the-public/, last accessed Feb 20, 2020.