Published by Enhelion, 2021-11-09 01:36:34






We are a privileged generation to live in this era full of technological
advancements. Gone are the days when almost everything was done manually;
now we live in a time when a great deal of work is taken over by machines,
software, and various automated processes. In this regard, artificial intelligence
has a special place among all the advancements made today. Artificial intelligence, or
AI, is the science of computers and machines developing
intelligence like that of humans. The Merriam-Webster dictionary defines intelligence
as “the ability to learn or understand things or to deal with new or difficult
situations.” Generally, the objective of Artificial Intelligence is to create
computers, software, and machines that are capable of intelligent and, in some
cases, unpredicted and creative behavior. The Organisation for Economic Co-
operation and Development (OECD) has defined an ‘Artificial
Intelligence System’ in its principles: “an Artificial Intelligence system is
a machine-based system that can, for a given set of human-defined objectives,
make predictions, recommendations, or decisions influencing real or virtual
environments.”

In this technology, machines are able to perform some of the simple to complex
tasks that humans need to do on a regular basis. As AI systems are used in our
daily lives, it is not wrong to say that our lives have become more advanced
through this technology. Consequently, AI has become quintessential to our
routines. Today, Artificial Intelligence finds usage in a plethora of things, from
consumer appliances, voice assistants and healthcare to autonomous vehicles. In
fact, we are becoming more and more dependent on systems operating with
Artificial Intelligence for the maintenance and functioning of our physical and
digital infrastructure. What is most important to keep in mind, however, is that
the law should advance along with technology: it should be updated to
accommodate technological advancements and their legal fallouts. Artificial
Intelligence, with its ability to learn and operate autonomously from humans, is
posing many challenges to established areas of law.1 These exigencies must be
catered to by adaptation, improvisation and expansion of the existing legal
architecture. Artificial Intelligence systems are designed to operate with varying
levels of autonomy. The basic ingredient of Artificial Intelligence is the
algorithm. In plain words, an algorithm can be described as a procedure for
solving a problem in a finite number of steps. As Microsoft's Tarleton Gillespie
states, algorithms are “encoded procedures of transforming input data into a
determined output, based on specified calculations.”2
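Gillespie's framing can be made concrete with a small sketch. The function below is purely illustrative (the data and names are invented): it transforms input data into a determined output through a fixed, finite sequence of specified calculations, which is all an algorithm is.

```python
# A minimal illustration of an algorithm in Gillespie's sense:
# an encoded procedure that transforms input data into a
# determined output through specified calculations.
# Illustrative sketch only; the names and data are hypothetical.

def recommend(watch_history, catalogue):
    """Recommend the catalogue title sharing the most genres
    with what the user has already watched."""
    watched_genres = set()
    for title in watch_history:
        watched_genres.update(catalogue.get(title, []))
    # Score every unwatched title by genre overlap: a fixed,
    # finite number of steps for any input.
    best, best_score = None, -1
    for title, genres in catalogue.items():
        if title in watch_history:
            continue
        score = len(watched_genres & set(genres))
        if score > best_score:
            best, best_score = title, score
    return best

catalogue = {
    "Film A": ["thriller", "crime"],
    "Film B": ["romance"],
    "Film C": ["crime", "drama"],
}
print(recommend(["Film A"], catalogue))  # Film C
```

Given the same input, the procedure always yields the same, fully determined output; nothing about it depends on human judgement once it is encoded.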

The revolution of Artificial Intelligence is widespread throughout society.
Think of the smart air conditioners that decide the optimum temperature of your
room. Should human-computer interactions that allow processing of
natural language between humans and computers be considered an exercise
of freedom of speech? This is one of the many questions AI and its usage pose.
The world has enmeshed itself in a wide chain of networks, and with this kind

1 Woodrow Barfield, “Research Handbook on the Law of Artificial Intelligence”, p. 1.
2 Tarleton Gillespie, “The Relevance of Algorithms”, available at: https://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262525374.001.0001/upso-9780262525374-chapter-9/, last accessed Oct 14, 2019.

of extensive networking, even the internet itself has come to depend on
Artificial Intelligence to function.

Artificial Intelligence-based software is also extensively used for content
selection based on user preferences, showing advertisements3, targeting
particular classes of people, regulating news feeds on social media, etc.
Remember when you see advertisements for products across various platforms
based on something you looked up on Google? Well, thank Artificial
Intelligence in that case.

NITI Aayog, the policy think-tank of the Government of India, has released a
discussion paper on ‘National Strategy for Artificial Intelligence’. This
represents a significant first step towards the regulation of AI in India.
However, as is often the case with fast-paced innovation, the regulatory system
is seldom able to keep pace with developments. Thus, the need of the hour
is to develop a regulatory framework that comprehensively addresses the
numerous legal issues pertaining to AI. In this article, we attempt to highlight
a few of the legal issues facing the AI industry.


The use of Artificial Intelligence to create the range of behaviors shown by
emerging smart technologies is core to the discussion of Artificial Intelligence
as a transformative and disruptive technology.4 The way in which Artificial
Intelligence solves a problem can be completely new to humans and because of

3Stephen F. Deangelis, “Artificial Intelligence: How Algorithms Make Systems Smart”, available at: https://www.wired
.com/insights/2014/09/artificial-intelligence-algorithms-2/, last accessed Oct 14, 2019.
4Supra note 1.

the said reason, the liability and interests of Artificial Intelligence raise several
questions in different areas of law.

Consider a computer system which functions on algorithms and is controlled by
Artificial Intelligence; the activities it carries out cannot be predicted by
humans. In such a situation, with whom shall the liability rest if the Artificial
Intelligence causes harm to a human or damage to property? The Artificial
Intelligence which directed the actions of the machine that caused the damage
but lacks personhood status, or the human being who lacks knowledge of how
the machine performed, or of whether the machine was actually trying to solve a
particular problem?

Another aspect of Artificial Intelligence can be witnessed in the grant of
Intellectual Property Rights. The granting of such rights to Artificial
Intelligence is still debatable. Can copyright be given to an Artificial
Intelligence for authorship of a work created by an algorithm? Or, in the case of
a patent, where the invention was independently created by an algorithm that
itself was derived from machine learning techniques?

Machines that lack the ability to think beyond a few simple rules directing their
actions are devoid of intelligence. In the case of Comptroller of the
Treasury v. Family Entertainment Centers,5 the Special Appeals Court
considered whether life-sized animatronic puppets that danced and sang at a
Chuck E. Cheese triggered a state tax on establishments serving food
“where there is a furnished performance.” The Court held that while a pre-programmed
robot can perform menial tasks, because a pre-programmed robot
has no ‘skill’, it cannot ‘perform’ a piece of music.6

5 519 A.2d 1337, 1338 (Md. 1987).

However, if we have robots which have the capability to sense the
environment, process it, and carry out a particular action on the basis of the
algorithms on which they function, then they may arrive at solutions
unknown to humans. We can therefore infer that the more human-like
Artificial Intelligence becomes, the more the law is challenged.7 The
basic question that arises is who should be made liable in circumstances
where Artificial Intelligence solves a problem in a way completely
unknown to the humans in the system. Can the person who created the
algorithm or made the machine be held liable? Should humans be made
liable at all if artificially intelligent systems can write their own algorithms and
produce solutions unknown to humans?
In the coming times there may be significant disputes over law
and policy in fixing liability, but there has to be a body of law to guide the
courts in deciding disputes, and in particular in allocating
liability between humans and artificially intelligent machines.

In cases where harm is alleged to have been caused by Artificial
Intelligence, courts are often asked to investigate the novel technology and apply
precedents in the form of case law, much of which does not fit the
circumstances, to make determinations on liability. For example,
common-law tort and malpractice claims often center on the very human
concepts of fault, negligence, knowledge, intent, and reasonableness.8

7 Supra note 1, at p. 5.
8 John Frank Weaver, “We Need to Pass Legislation on Artificial Intelligence Early and Often”, The Citizen's Guide to the Future (2014), available at:
The main question which seeks our attention now is how liability can be
assigned when human judgement is replaced by an Artificial
Intelligence. In United States v. Athlone Indus., Inc.,9 the court ruled
that “robots cannot be sued”, and instead discussed how the manufacturer
of a defective robotic machine is liable for civil penalties for the machine's
defects. However, it is also necessary to keep in mind that robots
and Artificial Intelligence have become far more sophisticated and autonomous
since this judgment.

In a developing country like India, Artificial Intelligence holds out the promise
of new breakthroughs in medical research, and Big Data generates more
calibrated searches and allows quicker detection of crimes.10 The use of
Artificial Intelligence in the health industry in India is well documented. The
Manipal Hospital Group has partnered with IBM's Watson for Oncology for the
diagnosis and treatment of seven types of cancer, while in the context of
pharmaceuticals, Artificial Intelligence software is being used to scan
all available academic literature for tasks such as molecule
discovery.11 Needless to say, the amount of data gathered and its analysis will
have immense benefits for citizens.


The development of Artificial Intelligence has been rampant in recent years,
and at present Artificial Intelligence is being used as a tool in both
private and public sector organizations around the globe. Along with these

intelligence_laws_early_and_often, last accessed Oct 14, 2019.
9 746 F.2d 977 (3d Cir. 1984).
10 Rohan George, Predictive Policing: What is it, How it works, and its Legal Implications, The Centre for Internet and
Society, India, available at:
and-it-legal-implications, last accessed on Oct 10, 2019.
11 Artificial Intelligence in the Healthcare Industry in India, The Centre for Internet and Society, India, available at:
, last accessed Oct 10, 2019.

technological innovations and their implementations, there are other important
aspects, one of them being the tension between AI and Data Protection.

A long-term approach is needed to identify the challenges and to examine
whether the current approach to data protection has become outdated and
ineffective. With the technological advancements in AI, therefore, we have both
an opportunity and an obligation to examine the effectiveness of the current data
protection laws and practices in protecting privacy in an era of AI.12

While some scholars have argued that AI poses a threat to data protection,
others feel that AI can offer opportunities to further strengthen it. For
example, AI can help companies limit or monitor who is looking at an
individual's data and respond in real time to prevent inappropriate use or theft
of data. Companies are also in the process of developing ‘privacy bots’, which
remember a user's privacy preferences and try to apply them consistently
across various sites.

‘Polisis’, which stands for “privacy policy analysis”, is an AI that uses machine
learning to “read a privacy policy it's never seen before and extract a readable
summary, displayed in a graphical flow chart, of what kind of data a
service/company collects, the sharing of that data and whether a user can opt
out of that collection or sharing.”13
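The idea behind a tool like Polisis can be illustrated with a toy sketch. Polisis itself relies on trained machine-learning models; the keyword matcher below is only a simplified stand-in for the same policy-to-summary idea, and every name and pattern in it is an invented assumption.

```python
# Toy sketch of the *idea* behind a tool like Polisis: scan a
# privacy policy and summarise what data is collected and
# whether the user can opt out. Polisis uses trained ML models;
# this keyword matcher is only an illustration, not its method.

DATA_TYPES = {
    "location": ["location", "gps"],
    "contact details": ["email address", "phone number"],
    "browsing history": ["browsing history", "cookies"],
}

def summarise_policy(text):
    text = text.lower()
    collected = [label for label, kws in DATA_TYPES.items()
                 if any(kw in text for kw in kws)]
    can_opt_out = "opt out" in text or "opt-out" in text
    return {"collects": collected, "opt_out": can_opt_out}

policy = ("We collect your email address and use cookies to "
          "personalise ads. You may opt out at any time.")
print(summarise_policy(policy))
# {'collects': ['contact details', 'browsing history'], 'opt_out': True}
```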

AI is also being used to caution users about suspicious websites, advertisements
and other malicious activity. Hence, AI is helping companies develop
technologies that are more protective of users' privacy.

12 Fred H. Cate & Rachael Dockery, Artificial Intelligence and Data Protection: Observations on a Growing Conflict,
13 Greenberg A., “An AI that reads Privacy Policies so that you don’t have to”, available at
/story/polisis-ai-reads-privacy-policies-so-you-dont-have-to/, last accessed Oct 10, 2019.

While data protection laws and regulations attempt to protect sensitive data and
similar variables, AI algorithms may need to include such data in the analysis to
ensure accurate and fair results. For example, when predicting the likelihood of
death in pneumonia patients, researchers at Microsoft discovered that a history
of asthma was associated with a lower risk of death, likely because these
individuals tend to seek earlier treatment. Because those protected variables
were left in the model, it was easier for researchers to account for them.14
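The pneumonia example can be illustrated numerically. The figures below are invented for illustration and are not the Microsoft study's data; they only show that once a variable like asthma history is dropped, two groups with very different risks collapse into one pooled rate that the model can no longer account for.

```python
# Toy numeric illustration of the pneumonia example: dropping a
# protected variable (asthma history) hides its counter-intuitive
# association with risk. All figures are invented for illustration.

patients = [
    # (has_asthma, died)
    (True, False), (True, False), (True, False), (True, True),
    (False, True), (False, True), (False, False), (False, False),
]

def death_rate(records):
    return sum(died for _, died in records) / len(records)

asthma = [p for p in patients if p[0]]
no_asthma = [p for p in patients if not p[0]]

print(f"asthma history:    {death_rate(asthma):.2f}")            # 0.25
print(f"no asthma history: {death_rate(no_asthma):.2f}")         # 0.50
print(f"pooled (variable dropped): {death_rate(patients):.2f}")  # 0.38
```

With the variable kept in, the model can see (and researchers can account for) the lower observed risk in the asthma group; with it removed, only the undifferentiated pooled rate remains.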

Can a chatbot or chat robot (Apple's Siri / Amazon's Alexa) be liable if it
commits an error with a person's personal data? AI in the form of
chatbots interacts with customers on websites. These chatbots can follow a
scripted text and, through machine learning (ML) and increased interaction,
deviate from the standard questions to provide a more human-like
interaction. In the course of communicating with the chatbot, if a person
were to reveal sensitive personal information for any reason whatsoever, what
happens to this data? In the case of an ML chatbot which does not work as per a
scripted text and has collected sensitive personal information, who shall be
responsible if Rule 5(3) of the IT (Reasonable Security Practices and Procedures
and Sensitive Personal Data or Information) Rules, 2011 is breached? The
obvious answer would be that the company shall be responsible, because the
rules state that “the body corporate or any person who on behalf of the body
corporate…” collects information. However, could the company avoid liability
by claiming that it was not aware that the chatbot, due to its AI ability of
machine learning, had collected sensitive and personal information?
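One practical mitigation for the scenario above, sketched here only as an illustrative assumption rather than a compliance recipe, is to screen every user message for sensitive personal data of the kinds Rule 3 covers (such as passwords and financial information) and redact it before anything is logged.

```python
import re

# Hedged sketch of one way a chatbot operator might reduce the
# Rule 5(3) risk discussed above: screen each user message for
# sensitive personal data and redact it before it is stored.
# The patterns and pipeline are illustrative assumptions only.

SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "password": re.compile(r"(?i)\bpassword\s*[:=]?\s*\S+"),
}

def redact(message):
    """Replace sensitive spans before the message is stored."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        message = pattern.sub(f"[{label} REDACTED]", message)
    return message

chat_log = []

def handle_user_message(message):
    chat_log.append(redact(message))       # store only the redacted text
    return "Thanks, how else can I help?"  # scripted reply

handle_user_message("My card number is 4111 1111 1111 1111")
print(chat_log[0])  # My card number is [card_number REDACTED]
```

The design choice is that redaction happens before storage, so even an ML chatbot that strays from its script never retains the sensitive fields its operator did not intend to collect.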

14 Supra note 12, at p. 30.

Most data protection laws require that there be a lawful basis for collecting
data. Under the General Data Protection Regulation (GDPR), the lawful
bases for collecting and sharing personal data are consent, contractual
performance, legal obligation, public interests, or legitimate interests.15
Therefore, the fundamental question that arises is how
organizations can give data protection authorities confidence that they have
considered a lawful basis for processing while still allowing flexibility for AI
models. As the Norwegian Data Protection Authority explained, “Most
applications of artificial intelligence require huge volumes of data in order to
learn and make intelligent decisions.”16

Just like the GDPR in the EU, the recent Personal Data Protection Bill 2018
(Data Privacy Bill) in India, intends to make organizations accountable for the
personal data processed and stored by them. For instance, the Data Privacy Bill
has expanded the applicability of processing-related requirements by importing
a wider definition to ‘personal data’. Moreover, the Data Privacy Bill gives the
data principal (i.e. the person whose data is collected) the right, inter alia, to
have his information erased. Such requirements are bound to pose challenges
for Big Data.

Many regulators, attorneys, businesses and academicians are working to find
a middle way to address the challenges posed by Artificial Intelligence to
Data Protection. However, the tension between Artificial Intelligence and Data
Protection is so great and so fundamental that efforts to reconcile them risk
weakening data protection or interfering with the benefits of AI. Neither
result is desirable, given the importance of both AI and privacy.

15 Art. 6(1), General Data Protection Regulation.
16Datatilsynet (Norwegian Data Protection Authority), Artificial Intelligence and Privacy, available at https://www., last accessed Oct 10, 2019.


The Constitution of India is the basic legal framework which allocates rights
and obligations to persons and citizens. Unfortunately, courts are yet to
adjudicate upon the legal status of AI machines, the determination of which
would clear up the existing debate over the applicability of existing laws to AI.

Recognizing the relevance of AI to the nation as a whole, and with the
intention of addressing the challenges and concerns around AI-based
technologies and systems and facilitating their growth and development in
India, the Ministry of Industry and Commerce in August 2017 constituted an
18-member task force titled the “Task Force on AI for India's Economic
Transformation”. The task force comprises experts, academics, researchers and
industry leaders, with the active participation of governmental bodies and
ministries such as NITI Aayog, the Ministry of Electronics and Information
Technology, the Department of Science & Technology, UIDAI and DRDO,
and is chaired by V. Kamakoti, a professor at IIT Madras, to explore
possibilities for leveraging AI for development across various fields.

The task force has recently published its report17, wherein it has provided
detailed recommendations along with next steps, to the Ministry of Commerce
with regard to the formulation of a detailed policy on AI in India. The key
takeaways from the report are-

17 Last accessed on May 19, 2020.

(A) The report has identified ten specific domains that are relevant to India
from the perspective of the development of AI-based technologies,
namely (i) Manufacturing; (ii) Fin-tech; (iii) Health; (iv)
Agriculture; (v) Technology for the differently abled; (vi) National
Security; (vii) Environment; (viii) Public utility services; (ix) Retail and
customer relationships; and (x) Education.

(B) The report has identified the following major challenges in deploying AI
systems on a large scale in India: (i) encouraging data collection,
archiving and availability with adequate safeguards, possibly via data
marketplaces/exchanges; (ii) ensuring data security, protection, privacy
and ethical use via regulatory and technological frameworks; (iii)
digitization of systems and processes with IoT systems whilst providing
adequate protection from cyber-attacks; and (iv) deployment of
autonomous products whilst ensuring that the impact on employment and
safety is adequately addressed.
(C) The task force has recommended setting up and funding an “Inter-Ministerial
National Artificial Intelligence Mission” for a period of 5
years, with funding of around INR 1,200 crores, to act as a nodal agency
to co-ordinate all AI-related activities in India. The mission should
engage itself in three broad areas, namely: (i) Core Activities: bring
together relevant industry players and academicians to set up a repository
of research for AI-related activities, fund national-level studies and
campaigns to identify AI-based projects to be undertaken in each of the
domains identified in the report, and spread awareness of AI systems in
society; (ii) Co-ordination: co-ordinate amongst the relevant ministries
and bodies of the government to implement national-level projects that
expand the use of AI systems in India; and (iii) Centers of Excellence:
set up interdisciplinary centers of research to facilitate a deeper
understanding of AI systems, establish a universal and generic testing
mechanism for the performance of AI systems, such as regulatory
sandboxes for technology relevant to India, and fund an interdisciplinary
data integration center to develop an autonomous AI machine that can
work on multiple data streams and provide information to the public
across all the domains identified in the report.

(D) In addition, the report states that the Ministry of Commerce and Industry
should create a data ombudsman, similar to those in the banking and
insurance industries, to quickly address data-related issues and grievances.

(E) Standards: The report proposes that the Bureau of Indian Standards
(“BIS”) should take the lead in ensuring that India proactively participates
in and implements the standards and norms being discussed
internationally with regard to AI systems.

(F) Enabling Policies: The task force has recommended that policies be
enacted that foster the development of AI systems, and has stated that two
specific policies should be enacted at the earliest, namely: (i) a policy
dealing with data, covering ownership, sharing rights and usage of data
(the report suggests that MeitY and the DIPP drive the effort to bring
about this policy); and (ii) tax incentives for income from AI technologies
and applications (the report suggests that MeitY and the Finance Ministry
collaborate to drive this policy and fix incentives for socially relevant
projects utilizing AI systems and technology).

(G) Human Resource Development: The report proposes that an education
curriculum and strategy be put in place to develop adequate human
resources with the required skill sets to meet the growing demand for
professionals who can handle AI systems. The report suggests that the
Ministry of Human Resource Development and the Ministry of Skill
Development and Entrepreneurship drive this effort.

(H) Bilateral Co-operation and International Rule-Making: The report
proposes that inter-ministerial collaborations be set up to ensure that
India actively participates in discussions and meetings centered on AI in
international forums. Additionally, the report suggests that the
government should leverage key bilateral partnerships with other nations
to encourage mutual discussion and the exchange of knowledge and
information pertaining to AI and its regulation. While the
recommendations provided by the task force are well thought out and
seem to be along the lines of encouraging the growth and assimilation of
AI-based technologies and systems in India, we will have to wait to see
whether any concrete action is undertaken in India based on these
recommendations.


Section 43A of the IT Act, 2000 mandates the following of ‘reasonable security
practices and procedures’ under the Information Technology
(Reasonable Security Practices and Procedures and Sensitive Personal Data or
Information) Rules, 2011 (“SPDI Rules”), which were enacted on 13 April 2011.
The section primarily concentrates on compensation for negligence in
implementing and maintaining ‘reasonable security practices and procedures’ in
relation to ‘sensitive personal data or information’. The criteria as to what
constitutes sensitive personal data or information of a person are provided under
Rule 3.18 Information that is freely available or accessible in the public domain,
or furnished under the RTI Act, cannot be categorized as such. Under the
Rules, a body corporate is required to obtain prior consent from the information
provider regarding the purpose of usage of the information collected, and may
collect it only for a lawful purpose. The body corporate is also mandated to take
reasonable steps to ensure that the information provider has knowledge of the
collection of information, the purpose of its collection, the intended
recipients, and the name and address of the agency collecting and retaining the
information.19

The body corporate has to allow the information provider the right to review or
amend the SPDI, and to give the information provider an option to withdraw
consent at any point of time in relation to the information so provided. In
case of withdrawal of consent, the body corporate has the option not to provide
the goods or services for which the concerned information was sought.
However, several questions have recently arisen with regard to the
effectiveness of the SPDI Rules, because the compliances set out under them
are restricted to certain kinds of information, and there is no protection as such
for information that does not fall under the definition of SPDI.

18 Rule 3 of the Information Technology (Reasonable security practices and procedures and sensitive personal data or
information) Rules, 2011.
19 Rule 5 of the Information Technology (Reasonable security practices and procedures and sensitive personal data or
information) Rules, 2011.


The validity of contracts formed through electronic means in India derives
from Section 10A of the IT Act. Electronic contracts are treated like
ordinary paper contracts, provided they satisfy all the essential conditions for
a valid contract, such as offer, acceptance, consideration, etc.
The IT Act also recognizes “digital signatures” or “electronic signatures” and
validates the authentication of electronic records by such digital/
electronic signatures. The contents of electronic records can also be proved in
evidence by the parties in accordance with the provisions of the Indian
Evidence Act, 1872. With the advent of smart contracts, i.e. contracts capable
of enforcing their terms on their own, an additional debate has arisen with
regard to enforceability against an AI, and it remains to be determined how this
issue will be resolved. It will not always be possible for such contracts to
capture all the relevant information from the real world needed to adequately
assess a situation. The contract will enforce its terms on the basis of its
programming, which may be inadequate and may cause harm or damage to a
party. In such an instance, an aggrieved party may face practical difficulties in
enforcing its rights in a different country. In addition, with the growth and
development of AI and robotics, the possibility of an AI entering into a
contract of its own volition has become more prominent. To assess whether
such a contract may be considered valid in India, reference has to be made to
the Indian Contract Act, 1872, to determine whether an AI would be regarded
as a person competent to enter into a contract, and whether the specific
essentials of a valid contract, such as offer, acceptance and consideration, are
satisfied. As the Indian Contract Act, 1872 envisages that only a “legal
person” may be competent to enter into a valid contract, and as the general rule
and practice thus far has been that robots or machines cannot qualify as
natural or legal persons, a contract entered into by an AI of its own volition
may not be regarded as a valid contract under applicable law in India.
Practical concerns will also arise, such as a court's ability to understand the
terms that have been agreed to, as these terms will be expressed in
programming terms with which the court may not be acquainted. The courts
will also need to assess whether the terms that have been agreed to have been
properly instructed to the AI. Another major concern with regard to AI is the
lack of a conscience. A contract to kill could be enforced by a smart contract in
which funds are released to the shooter once he feeds in proof of death via
some biotechnology-based contraption. It needs to be ensured that technology
standards are developed and put in place that prevent the enforcement of such
contracts.
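The enforcement-by-code point above can be sketched in a few lines. This is a deliberately simplified, hypothetical escrow written as an ordinary Python class (real smart contracts run on platforms such as Ethereum, not as local objects): the code releases funds the moment its programmed condition is met, and it has no way of judging whether that condition adequately reflects the real-world agreement.

```python
# Minimal, hypothetical sketch of the "smart contract" idea:
# the terms live in code, and the code enforces them once its
# programmed condition is met -- adequate or not. Illustrative
# only; not how contracts on an actual blockchain are written.

class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.released = False

    def confirm_delivery(self):
        # The contract only "knows" what this flag tells it; it
        # cannot judge whether delivery was genuine or defective.
        self.delivered = True

    def settle(self):
        if self.delivered and not self.released:
            self.released = True
            return f"{self.amount} paid to {self.seller}"
        return "conditions not met; funds held"

contract = EscrowContract("Asha", "Bharat", 1000)
print(contract.settle())   # conditions not met; funds held
contract.confirm_delivery()
print(contract.settle())   # 1000 paid to Bharat
```

Note that `settle()` pays out on the strength of a single flag: if the flag is set wrongly, or the programmed condition omits a term the parties actually agreed, the code still executes, which is precisely the inadequacy and harm the paragraph above describes.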


To begin with, as a policy for artificial intelligence, the goal of government
regulators should be to draft legislation that does not restrict research on AI,
but still protects the public from possible dangers as artificial intelligence
approaches and then exceeds human levels of intelligence. For “smart machines”
and other emerging technologies which are controlled by artificial intelligence,
and which perform tasks requiring computationally intensive algorithms and
sophisticated analytical techniques, current legal doctrines are being
challenged as these systems engage in creative and unpredictable behavior.20

The technologies which are driving the artificial intelligence revolution and
posing challenges to current legal doctrine are the analytical techniques and

20 Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Cal. L. Rev., (2015).

algorithms which give machines the capability to go beyond their original
programming and to operate autonomously from humans.21

Hence, the “techniques” of artificial intelligence, namely the algorithms and
sophisticated analytical methods, should be the focus of the law on artificial
intelligence, as opposed to the manifestation of artificial intelligence in a
particular physical form. The primary reason is that artificial intelligence
does not need a body either to exist or to act in the physical world. Artificial
intelligence also controls digital entities, and therefore a law which focuses
only on “smart machines” will never be able to adequately cover the full
range of technologies controlled by artificial intelligence which are entering the
marketplace.

The ability of artificial intelligence to act autonomously from humans, to
engage in creative problem solving and, as noted above, to exist either as a
physical or a digital entity raises specific questions of law. For example, if
someone today creates a “virtual avatar” which operates autonomously, will it
qualify to act as their agent? Would product liability law apply to
algorithms and software?

Assigning liability when an artificially intelligent entity harms a human or
damages property is, of course, an important issue. Some professors
have rightly observed that complex autonomous systems will present a problem
for classic fault-based legal schemes like the law of torts, because intelligent
systems have the potential to behave in unpredictable ways.22

21 Ethem Alpaydin, Machine Learning: The New AI (MIT Press, 2016).
22 William D. Smart, Cindy M. Grimm, and Woodrow Hartzog, An Education Theory of Fault for Autonomous Systems,
Yale University Press (2017).

How can people who build and deploy automated and intelligent systems be
said to be at fault when they could not have reasonably anticipated the
behavior, and thus the risk, of an automated intelligent system? Given the lack
of ‘legal person’ status for artificial intelligence, the principle of strict liability,
which holds producers liable for harm regardless of fault, might be an approach
worth considering.23

The White House in the U.S. released a report24 on the future of artificial
intelligence which offers several recommendations on how to regulate the
technology. It notes that the data used to train an artificial intelligence may
influence what it learns and how it responds; hence, federal agencies should
prioritise the creation of open training data and open data standards in artificial
intelligence. A potential step in this area could be an “Open Data for
AI” initiative with the objective of releasing a significant number of government
datasets, in addition to the publicly available datasets released by some of the
tech giants, such as Taskmaster-1 by Google25 and the self-driving datasets by
Lyft and Waymo26, among others, to accelerate artificial intelligence research
and to encourage the use of open data standards and best practices across
government, academia and the private sector. Secondly, government agencies
should draw on appropriate technical expertise when setting regulatory policy
for artificially intelligent systems. The nature of artificial intelligence
techniques is very complex and

23Michael Guihot, Anne Matthew, and Nicolas Suzor, Nudging Robots: Innovative Solutions to Regulate Artificial
Intelligence, (2017).
24 Artificial Intelligence, Automation, and the Economy, Executive Office of the President, 2016, available at:,
last accessed on Oct 10, 2019.
25Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset, 2019, available at
/pub48484/, last accessed Oct 10, 2019.
26After Lyft, Waymo Open Sources Self-Driving Dataset To The Public, accessible at
waymo-open-sources-self-driving-dataset-to-the-public/, last accessed Feb 20, 2020.

therefore, domain expertise will be crucial for informing legislators of the scope
and capabilities of artificial intelligence.
