from DSSResources.com


What policies should govern Artificial Intelligence (AI) applications?

by Daniel J. Power
Editor, DSSResources.COM

Artificial Intelligence (AI) applications are reaching the point where both organizations and governments must decide how to regulate their use and what policies should govern them. Managers and politicians should debate what is permitted and what is encouraged in the development and use of Artificial Intelligence. There are many outstanding issues. For example, who is liable if an AI application's decision or recommendation is wrong? How should AI applications be validated? On May 8, 2018, at the I/O 2018 Developer Conference, Google announced Duplex, a new virtual assistant feature that uses Artificial Intelligence to make phone calls on a person's behalf in a voice that sounds human. The announcement sparked a "harsh backlash," and by May 12, 2018, Google had changed its policy and announced that its AI assistant would identify itself as a robot during phone calls. Artificial Intelligence can augment, supplement, and support human intelligence.

Recently, Singapore's Info-communications Media Development Authority (IMDA) announced the establishment of an Advisory Council on the Ethical Use of AI and Data. "The Advisory Council will assist the Government to develop ethics standards and reference governance frameworks and publish advisory guidelines, practical guidance, and/or codes of practice for the voluntary adoption by the industry." The current discussion paper "recommends two key principles: 1) Decisions made by or with the assistance of AI should be explainable, transparent and fair to consumers; and 2) AI systems, robots and decisions should be human-centric," cf., IMDA Press Release (2018).

The Governance of AI Program based at the University of Oxford’s Future of Humanity Institute strives to steer the development of artificial intelligence for the common good using research and policy engagement. The program's "focus is on the challenges arising from transformative AI: advanced AI systems whose long-term impacts may be as profound as the industrial revolution", cf., https://www.fhi.ox.ac.uk/governance-ai-program.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation report (Brundage and Avin, 2018) notes "The use of AI to automate tasks involved in surveillance (e.g. analysing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation. We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data. These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates. (p. 6)"

USA Today writer Brett Molina (2018) reported on an MIT research project that created a "psychopath" Artificial Intelligence named Norman. Molina writes, "So, why would MIT create a psycho AI? It's all about algorithms, and when things might go awry it's not as simple as blaming the machine. 'The data that is used to teach a machine learning algorithm can significantly influence its behavior,' reads a statement on their website. 'So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.'" See http://norman-ai.mit.edu/.
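The Norman example suggests a simple way to see data bias at work. The toy Python sketch below is purely illustrative and is not the MIT team's code or data; it assumes scikit-learn is available and invents tiny, hypothetical caption sets to show that the same learning algorithm, trained on different data, describes the same input very differently.

    # Minimal sketch (not the MIT Norman code): the same learning algorithm,
    # trained on two different caption sets, labels the same input differently.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data: identical image descriptions, differently biased captions.
    neutral_data = [("round dark shape", "a bird flying over a field"),
                    ("two overlapping blobs", "people holding hands"),
                    ("tall narrow figure", "a vase of flowers")]
    biased_data  = [("round dark shape", "a man falls to the ground"),
                    ("two overlapping blobs", "a body lying in the street"),
                    ("tall narrow figure", "a person screaming")]

    def train(pairs):
        texts, labels = zip(*pairs)
        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(texts, labels)
        return model

    standard = train(neutral_data)
    norman_like = train(biased_data)

    inkblot = ["round dark shape"]
    print("standard model:", standard.predict(inkblot)[0])    # neutral caption
    print("biased model:  ", norman_like.predict(inkblot)[0]) # disturbing caption

The sketch mirrors the MIT statement: swap the training data and an otherwise identical algorithm produces very different behavior.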

U.S. Chief Justice John Roberts spoke to high school graduates at a June 2018 commencement ceremony; his message: "Beware the robots." Wolf (2018) reports "Roberts warned that artificial intelligence and big data can alter the way people perceive the world." "The result," Roberts said, "can be a narrowing and over-simplification that is contrary to individuality and creativity." Wolf noted Roberts said, "I worry that we will start thinking like machines."

Sample (2017) reports "The rise of artificial intelligence (AI) has led to an explosion in the number of algorithms that are used by employers, banks, police forces and others, but the systems can, and do, make bad decisions that seriously impact people’s lives. But because technology companies are so secretive about how their algorithms work – to prevent other firms from copying them – they rarely disclose any detailed information about how AIs have made particular decisions." He notes "Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, a research team at the Alan Turing Institute in London and the University of Oxford, call for a trusted third party body that can investigate AI decisions for people who believe they have been discriminated against."

The American HBO science fiction western Westworld (2016) explores the coupling of Artificial Intelligence with human-appearing androids. The plots are thought-provoking. In general, the problems associated with intelligent, human-appearing androids seem to outweigh any benefits (at least in fiction). Other AI fiction includes The Terminator (1984), where AI is both hero and villain, and Blade Runner (1982), another android plot with twisted AI control issues. C-3PO and R2-D2 of Star Wars are the best known, and cutest, Artificial Intelligence bots. HAL of 2001: A Space Odyssey (1968) is the most omnipresent AI and the most troubling. Reality and pseudo-reality create confusion for people. At a future time, AI might stand for Artificial Idiocy or an Alien Invasion.

There are potential problems and ethical issues with the adoption and use of Artificial Intelligence. Metz (2018) reports "A.I. systems also exhibit strange and unexpected behavior because the way they learn from large amounts of data is not entirely understood. That makes them vulnerable to manipulation; today’s computer vision algorithms, for example, can be fooled into seeing things that are not there." Criminals may exploit AI for crime (Bajarin, 2016). AI may reduce the number of jobs for people (Bajarin, 2016; Bossmann, 2016). AI systems may also exhibit "artificial stupidity," making mistakes in situations their training did not cover (Bossmann, 2016).
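Metz's point about vision systems being fooled "into seeing things that are not there" refers to adversarial examples. The sketch below is a hedged illustration of one widely published technique, the Fast Gradient Sign Method; it assumes PyTorch and a pretrained image classifier and is not drawn from any of the cited reports.

    # Minimal sketch of the Fast Gradient Sign Method: a tiny, nearly invisible
    # perturbation of an image that can change a classifier's prediction.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, true_label, epsilon=0.03):
        """Return a copy of `image` nudged in the direction that increases the loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()  # tiny, targeted noise
        return adversarial.clamp(0, 1).detach()

    # Hypothetical usage:
    #   model = torchvision.models.resnet18(pretrained=True).eval()
    #   adv = fgsm_attack(model, image_batch, label_batch)
    #   model(adv).argmax(1) often differs from model(image_batch).argmax(1)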

People should be proactive in governing and regulating the use of AI in society, organizations, homes, and devices. Using AI is a large-scale experiment. We, as scientists, have an obligation to ensure that people are not harmed by AI. All of us are becoming subjects in uncontrolled AI experimentation.

Some Suggested Policies

1. The owner of an AI application cannot limit his, her, or its liability related to use of the AI in any way, even with a disclaimer.

2. If an expert is using an AI application for support, then both the expert and the owner of the AI share liability.

3. AI applications that replace human employees should be discouraged, except when the task is dangerous or creates other harms for a human.

4. AI applications should be designed to support humans and enhance the quality of human life.

5. Development of autonomous, self-replicating AI robots should be discouraged.

6. AI applications should only make autonomous decisions in routine, recurring decision situations. Even in those situations, knowledgeable humans should regularly monitor the decisions and consequences to ensure the AI application is performing satisfactorily.

7. It should always be disclosed when an AI application is making a decision, and the reasoning behind the decision should be understandable and transparent to anyone impacted by the decision.

8. Each AI application should be tested to ensure it is fair, accountable, and transparent (a minimal example of one such fairness check appears after this list).

9. AI application code should be restricted and well-secured. Passwords and other security measures related to an AI application should be routinely checked for vulnerabilities.

10. AI and machine learning are critical technologies and export of source code should be restricted.

11. A trusted third party body, perhaps at the national or international level, should be established that can investigate AI decisions when a charge is brought by a person or group who believe they have been discriminated against.
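Policy 8 calls for fairness testing. As one hypothetical illustration of what such a test could look like in practice, the sketch below computes a single, narrow metric: the gap in favorable-decision rates across groups, often called demographic parity. Real fairness auditing would require many more metrics, legal and domain context, and the ongoing human review called for in policy 6.

    # Hypothetical sketch of one narrow fairness check (demographic parity):
    # compare the rate of favorable AI decisions across groups.
    from collections import defaultdict

    def approval_rates(decisions):
        """decisions: list of (group, approved) pairs; returns approval rate per group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    # Invented example data: each pair is (protected group, favorable decision?)
    decisions = [("group_a", True), ("group_a", True), ("group_a", False),
                 ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    print(rates, "demographic parity gap:", round(gap, 2))
    # A large gap would flag the application for the human review described in policy 6.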

Calo (2017), in his excellent primer on AI and policy issues, concluded optimistically that "AI has managed to capture policymakers’ imaginations early enough in its life-cycle that there is hope we can yet channel it toward the public interest." Capturing the imaginations of policymakers must be translated into appropriate policies. That task is only just beginning.

References

Author Unknown, "Benefits & Risks of Artificial Intelligence," Future of Life Institute, N.D. at URL https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Bajarin, T., "These Are My 2 Biggest Fears About Artificial Intelligence," Time, November 14, 2016 at URL http://time.com/4569585/ai-robots-fears/

Bossmann, J., "Top 9 ethical issues in artificial intelligence," World Economic Forum, Oct. 21, 2016, at URL https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/

Brundage, M. and J. Bryson, "Smart Policies for Artificial Intelligence," working paper, N.D. at URL https://arxiv.org/ftp/arxiv/papers/1608/1608.08196.pdf

Brundage, M. and S. Avin et al, "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," Future of Humanity Institute, February 2018, based upon M. Brundage and Shahar Avin, co-chaired workshop entitled “Bad Actor Risks in Artificial Intelligence” in Oxford, United Kingdom, February 19 and 20, 2017 at URL https://maliciousaireport.com/.

Calo, R., "Artificial Intelligence Policy: A Primer and Roadmap," UC-Davis Law Review, Vol. 51:399, 2017, pp. 399-435 at URL https://lawreview.law.ucdavis.edu/issues/51/2/Symposium/51-2_Calo.pdf

Escher, A. and M. Lynley, "8 big announcements from Google I/O 2018," TechCrunch, May 8, 2018 at URL https://techcrunch.com/2018/05/08/8-big-announcements-from-google-i-o-2018/.

IMDA, "Singapore implements Artificial Intelligence governance and ethics initiatives," Infocomm Media Development Authority Press Release, June 5, 2018 at URL http://dssresources.com/news/4964.php

Malliaraki, E., "Toward ethical, transparent and fair AI/ML: a critical reading list," Medium, February 9, 2018 at URL https://medium.com/@eirinimalliaraki/toward-ethical-transparent-and-fair-ai-ml-a-critical-reading-list-d950e70a70ea

Metz, C., "Good News: A.I. Is Getting Cheaper. That’s Also Bad News," The New York Times, Feb. 20, 2018 at URL https://www.nytimes.com/2018/02/20/technology/artificial-intelligence-risks.html.

Molina, B., "Terrifying: an artificial intelligence was fed Reddit captions. Now it's a 'psychopath'," USA Today, June 7, 2018 at URL https://www.usatoday.com/story/tech/nation-now/2018/06/07/artificial-intelligence-fed-reddit-captions-became-psychopath/681888002/

Sample, I., "AI watchdog needed to regulate automated decision-making, say experts," The Guardian, January 17, 2017 at URL https://www.theguardian.com/technology/2017/jan/27/ai-artificial-intelligence-watchdog-needed-to-prevent-discriminatory-automated-decisions.

Welch, C., "It’s hard to believe AI can interact with people this naturally," The Verge, May 8, 2018 at URL https://www.theverge.com/2018/5/8/17332070/google-assistant-makes-phone-call-demo-duplex-io-2018

Wolf, R., "Chief Justice John Roberts to high school graduates (and his daughter): 'Beware the robots'," USA Today, June 7, 2018 at URL https://www.usatoday.com/story/news/politics/2018/06/07/beware-robots-chief-justice-john-roberts-commencement-warning/681626002/

*********************

Part of Norman’s introduction to the public was staged as an April Fool’s prank on the lab’s official site (http://www.the13thfloor.tv/2018/04/09/is-norman-the-first-a-i-psychopath-his-creators-say-so/).

From http://norman-ai.mit.edu/
APRIL 1, 2018
AI-Powered Psychopath
We present you Norman, world's first psychopath AI. Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.

Norman is an AI that is trained to perform image captioning; a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman's responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.

Note: Due to the ethical concerns, we only introduced bias in terms of image captions from the subreddit which are later matched with randomly generated inkblots (therefore, no image of a real person dying was utilized in this experiment).

Last update: 2018-06-18 03:43
Author: Daniel Power
