Machine ethics
Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with the moral behavior of artificially intelligent beings.[1] Machine ethics contrasts with roboethics, which is concerned with the moral behavior of humans as they design, construct, use and treat such beings. Machine ethics should not be confused with computer ethics, which focuses on professional behavior towards computers and information. It should also be distinguished from the philosophy of technology, a field that is predominantly concerned with the ethical standing of humans who use technological products, given that machine ethics regards artificially intelligent machines as actual or potential moral agents.[2]
History
Before the 21st century, the ethics of machines was largely the subject of science fiction literature, mainly owing to the limitations of computing and artificial intelligence (AI). These limitations are being overcome through advances in theory and hardware, resulting in a renewed focus on the field and making machine ethics a bona fide area of research. The first use of the term seems to have been by Mitchell Waldrop in the 1987 AI Magazine article "A Question of Responsibility":
"However, one thing that is apparent from the above discussion is that intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics, in the spirit of Asimov’s three laws of robotics."[3]
The field was delineated in the AAAI Fall 2005 Symposium on Machine Ethics:
"Past research concerning the relationship between technology and ethics has largely focused on responsible and irresponsible use of technology by human beings, with a few people being interested in how human beings ought to treat machines. In all cases, only human beings have engaged in ethical reasoning. The time has come for adding an ethical dimension to at least some machines. Recognition of the ethical ramifications of behavior involving machines, as well as recent and potential developments in machine autonomy, necessitate this. In contrast to computer hacking, software property issues, privacy issues and other topics normally ascribed to computer ethics, machine ethics is concerned with the behavior of machines towards human users and other machines. Research in machine ethics is key to alleviating concerns with autonomous systems—it could be argued that the notion of autonomous machines without such a dimension is at the root of all fear concerning machine intelligence. Further, investigation of machine ethics could enable the discovery of problems with current ethical theories, advancing our thinking about Ethics."[4]
Machine ethics is sometimes referred to as machine morality, computational ethics or computational morality. A variety of perspectives on this nascent field can be found in the collected edition "Machine Ethics"[5] that stems from the AAAI Fall 2005 Symposium on Machine Ethics.
In 2009, Oxford University Press published Moral Machines: Teaching Robots Right from Wrong, which it advertised as "the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics." It cited some 450 sources, about 100 of which addressed major questions of machine ethics. Few of these sources were written before the 21st century, largely because the relevant forms of artificial intelligence did not yet exist.[6]
In 2011, Cambridge University Press published a collection of essays about machine ethics edited by Michael and Susan Leigh Anderson,[5] who also edited a special issue of IEEE Intelligent Systems on the topic in 2006.[7]
In 2014, the US Office of Naval Research announced that it would distribute $7.5 million in grants over five years to university researchers to study questions of machine ethics as applied to autonomous robots,[8] and Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, which raised machine ethics as the "most important...issue humanity has ever faced," reached #17 on the New York Times list of best selling science books.[9]
In 2016 the European Parliament published a paper[10] (a 22-page PDF) encouraging the Commission to address the issue of robots' legal status, as described more briefly in the press.[11]
Definitions
James H. Moor distinguishes four kinds of agents that machines can be in relation to ethics; a machine can be more than one type of agent.[12]
- Ethical impact agents: machine systems that carry an ethical impact, whether intended or not. Moor gives the example of a watch causing a worker to be at work on time. Alongside ethical impact agents there are unethical impact agents, and certain agents can be unethical impact agents at certain times and ethical impact agents at others. He gives the example of what he calls a 'Goodman agent', named after philosopher Nelson Goodman. The Goodman agent compares dates but was programmed using only the last two digits of the year, which results in dates beyond 2000 being misleadingly treated as earlier than those in the late twentieth century (a minimal sketch follows this list). Thus the Goodman agent was an ethical impact agent before 2000, and an unethical impact agent thereafter.
- Implicit ethical agents: machines constrained to avoid unethical outcomes.
- Explicit ethical agents: machines that have algorithms to act ethically.
- Full ethical agents: machines that are ethical in the same way humans are (i.e. have free will, consciousness and intentionality).
(See artificial systems and moral responsibility.)
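The Goodman agent can be made concrete with a minimal sketch. The code below is hypothetical and not drawn from Moor's paper; it simply shows how a two-digit year representation makes date comparison reliable before 2000 and silently wrong afterwards.

```python
# A minimal, hypothetical sketch of the Goodman agent: dates are stored with
# two-digit years, so comparisons that were reliable before 2000 become
# misleading once the calendar rolls over.
def earlier(date_a, date_b):
    """Compare (two_digit_year, month, day) tuples, as the Goodman agent does."""
    return date_a < date_b

late_1900s = (99, 12, 31)   # 1999-12-31 stored with two-digit year 99
early_2000s = (1, 1, 1)     # 2001-01-01 stored with two-digit year 01

# Misleading result: 2001 is treated as earlier than 1999.
print(earlier(early_2000s, late_1900s))   # prints True, which is wrong
```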
Foci
Urgency
In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed whether, and to what extent, computers and robots might acquire some level of autonomy, and to what degree they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including the ability to find power sources on their own and to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science fiction is unlikely, but that there are other potential hazards and pitfalls.[13]
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous function.[14] The US Navy has funded a report indicating that as military robots become more complex, greater attention should be paid to the implications of their ability to make autonomous decisions.[15][16] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[17] They point to programs like the Language Acquisition Device, which can emulate human interaction.
Algorithms
Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis).[18] Chris Santos-Lang argued in the opposite direction, on the grounds that the norms of any age must be allowed to change and that a natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable than machines to criminal "hackers".[19][20]
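The transparency argument can be illustrated with a small sketch: a trained decision tree can be printed as human-readable if-then rules, whereas a neural network's weights cannot be read the same way. The sketch below is not from the cited sources; the loan-style feature names and toy data are hypothetical, and scikit-learn's entropy criterion is used as a stand-in for ID3's information-gain splitting.

```python
# A minimal sketch (hypothetical data and features) of why decision trees are
# considered transparent: the learned model can be dumped as explicit rules.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["income", "prior_defaults"]          # hypothetical inputs
X = [[30, 2], [80, 0], [55, 1], [90, 1], [20, 3], [70, 0]]
y = [0, 1, 0, 1, 0, 1]                           # 1 = approve, 0 = deny

# criterion="entropy" uses information gain, the splitting rule of ID3.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=2).fit(X, y)

# The whole decision procedure can be audited as if-then rules, the kind of
# predictability the stare decisis analogy appeals to.
print(export_text(tree, feature_names=features))
```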
Training and instruction
In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other in searching out a beneficial resource and avoiding a poisonous one eventually learned to lie to each other in an attempt to hoard the beneficial resource.[21]
One problem in this case may have been that the goals were "terminal" (i.e. final), whereas ultimate human motives typically require never-ending learning. The robots were grouped into clans, and the successful members' digital genetic code was used for the next generation. After 50 successive generations, the members of one clan discovered how to indicate food instead of poison. At the same time, there were selfless robots that signaled danger and died to save others. These adaptations could have been the result of their genetic code or of their human creators.[19] (See also Genetic algorithm.)
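The generational mechanism described above is essentially a genetic algorithm: the most successful members' genomes seed the next generation. The following is a minimal sketch under assumed toy conditions; bit-string genomes and a made-up fitness function stand in for the robots' actual controllers and food/poison scoring.

```python
# A minimal, hypothetical sketch of generational selection: keep the most
# successful genomes and copy them, with mutation, into the next generation.
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 16, 20, 50

def fitness(genome):
    # Hypothetical stand-in for "found food, avoided poison": count of 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Keep the most successful half of the "clan" ...
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # ... and refill the population with mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

print("best fitness after", GENERATIONS, "generations:",
      fitness(max(population, key=fitness)))
```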
Machine learning bias
As big data and the machine learning algorithms that use it have come into use across fields as diverse as online advertising, credit ratings, and criminal sentencing, all with the promise of providing more objective, data-driven results, they have been identified as a source of perpetuating social inequalities and discrimination.[22][23] For example, a 2015 study found that women were less likely than men to be shown high-income job ads by Google's AdSense, and another study found that Amazon’s same-day delivery service was systematically unavailable in black neighborhoods; in both cases the companies could not explain the outcomes, which were the result of the black-box methods they used.[22]
In 2016, the Obama Administration's Big Data Working Group released reports warning of “the potential of encoding discrimination in automated decisions” and calling for “equal opportunity by design” in applications such as credit scoring.[24][25]
In an effort to be more fair and to avoid adding to already high imprisonment rates in the US, courts across America have started using quantitative risk-assessment software when making decisions about releasing people on bail and sentencing; these tools analyze a defendant's history and other attributes.[23] A 2016 ProPublica report analyzed recidivism risk scores calculated by one of the most commonly used tools, Northpointe's COMPAS system, and looked at outcomes over two years. It found that only 61% of those deemed high risk went on to commit additional crimes during that period, and that African-American defendants were far more likely to be given high scores than white defendants.[23]
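An audit of the kind ProPublica performed amounts to comparing a score's error rates across demographic groups. The sketch below is a minimal illustration with fabricated records, not ProPublica's data or methodology; the false-positive rate it computes is one common fairness metric among several.

```python
# A minimal sketch (fabricated example records) of auditing a risk score by
# comparing its false-positive rate across two groups.
def false_positive_rate(high_risk, reoffended):
    """Share of people who did NOT reoffend but were labelled high risk."""
    false_pos = sum(1 for h, r in zip(high_risk, reoffended) if h and not r)
    negatives = sum(1 for r in reoffended if not r)
    return false_pos / negatives if negatives else 0.0

# Hypothetical per-defendant records: (predicted high risk?, reoffended?)
group_a = [(True, False), (True, True), (True, False), (False, False)]
group_b = [(False, False), (True, True), (False, False), (False, True)]

for name, records in [("group A", group_a), ("group B", group_b)]:
    high, reoff = zip(*records)
    print(name, "false positive rate:",
          round(false_positive_rate(high, reoff), 2))
```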
Ethical implications
In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation.[6]
Preventing Discriminatory Outcomes in Machine Learning
Computing systems have been found to act on the prejudices of the people who clean, aggregate, and structure their data inputs, a phenomenon known as algorithmic bias. Cases of such algorithmic bias have been reported everywhere from loan services to the criminal justice system. In a review of the Florida court system, ProPublica found that COMPAS, a risk assessment software used by judges, was twice as likely to label people of color as at risk of recidivism.[26] In March 2018, in an effort to address rising concerns over machine learning’s impact on human rights, the World Economic Forum and Global Future Council on Human Rights published a white paper with detailed recommendations on how best to prevent discriminatory outcomes in machine learning.[27] The World Economic Forum developed four recommendations based on the UN Guiding Principles of Human Rights to help address and prevent discriminatory outcomes in machine learning.
The World Economic Forum’s recommendations are as follows:[27]
- Active Inclusion: the development and design of machine learning applications must actively seek a diversity of input, especially of the norms and values of specific populations affected by the output of AI systems
- Fairness: People involved in conceptualizing, developing, and implementing machine learning systems should consider which definition of fairness best applies to their context and application, and prioritize it in the architecture of the machine learning system and its evaluation metrics
- Right to Understanding: Involvement of machine learning systems in decision-making that affects individual rights must be disclosed, and the systems must be able to provide an explanation of their decision-making that is understandable to end users and reviewable by a competent human authority. Where this is impossible and rights are at stake, leaders in the design, deployment, and regulation of machine learning technology must question whether or not it should be used
- Access to Redress: Leaders, designers, and developers of machine learning systems are responsible for identifying the potential negative human rights impacts of their systems. They must make visible avenues for redress for those affected by disparate impacts, and establish processes for the timely redress of any discriminatory outputs.
As the ubiquity of machine learning and artificial intelligence grows, it is critical for technology companies to take the necessary precautions to prevent discrimination and to heed the advice of World Economic Forum Executive Chairman Klaus Schwab that “… new technologies are first and foremost tools made by people for people.”[28]
Approaches
Several attempts have been made to make ethics computable, or at least formal. Isaac Asimov's Three Laws of Robotics are usually not considered suitable for an artificial moral agent,[29] but it has been studied whether Kant's categorical imperative can be used.[30] It has been pointed out, however, that human values are in some respects very complex.[31] One way to explicitly surmount this difficulty is to acquire human values directly from humans through some mechanism, for example by learning them.[32][33][34]
Another approach is to base current ethical considerations on previous similar situations. This is called casuistry, and it could be implemented through research on the internet. The consensus from a million past decisions would lead to a new decision that is democracy-dependent.[35] This could, however, lead to decisions that reflect the biases and unethical behaviors exhibited in society. The negative effects of this approach can be seen in Microsoft's Tay (bot), a chatterbot that learned to repeat racist and sexually charged messages sent by Twitter users.[36]
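A minimal sketch of the casuistry idea might look like the following: decide a new case by majority vote over the most similar past decisions. The case encoding, similarity measure, and precedent data here are all hypothetical simplifications, not a published system.

```python
# A minimal, hypothetical sketch of casuistry as nearest-precedent voting.
from collections import Counter

# Hypothetical past cases: a feature vector plus the decision that was reached.
past_cases = [
    ((1, 0, 1), "permit"),
    ((1, 1, 1), "permit"),
    ((0, 1, 0), "forbid"),
    ((0, 0, 0), "forbid"),
    ((1, 0, 0), "permit"),
]

def similarity(a, b):
    # Count of matching features; a real system would need a richer measure.
    return sum(x == y for x, y in zip(a, b))

def decide(new_case, k=3):
    # Take the k most similar precedents and follow their majority decision.
    nearest = sorted(past_cases, key=lambda c: similarity(new_case, c[0]),
                     reverse=True)[:k]
    votes = Counter(decision for _, decision in nearest)
    return votes.most_common(1)[0][0]

print(decide((1, 1, 0)))  # follows the consensus of the closest precedents
```

As the Tay example above suggests, such a system is only as ethical as the precedents it aggregates.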
One thought experiment focuses on a Genie Golem with unlimited powers presenting itself to the reader. This Genie declares that it will return in 50 years and demands to be provided with a definite set of morals that it will then immediately act upon. The purpose of this experiment is to initiate a discourse over how best to handle defining a complete set of ethics that computers may understand.[37]
In fiction
Isaac Asimov considered the issue in the 1950s in I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[38]
Related fields
- Affective computing
- Formal ethics[39]
- Bioethics
- Computational theory of mind
- Computer ethics
- Ethics of artificial intelligence
- Moral psychology
- Philosophy of artificial intelligence
- Philosophy of mind
See also
- Artificial intelligence
- Automating medical decision-support
- Google car
- Military robot
- Machine Intelligence Research Institute
- Watson project for automating medical decision-support
- Tay (bot)
External links
- Machine Ethics, Interdisciplinary project on machine ethics.
Notes
- ^ Moor, James H. (July–August 2006). "The Nature, Importance and Difficulty of Machine Ethics". IEEE Intelligent Systems. 21 (4): 18–21. doi:10.1109/MIS.2006.80.
- ^ Boyles, Robert James M. (June 2018). "A Case for Machine Ethics in Modeling Human-Level Intelligent Agents". Kritike. 12 (1): 182–200. doi:10.25138/12.1.a9.
- ^ Waldrop, Mitchell (Spring 1987). "A Question of Responsibility". AI Magazine. 8 (1): 28–39. doi:10.1609/aimag.v8i1.572.
- ^ "Papers from the 2005 AAAI Fall Symposium". Archived from the original on 2014-11-29.
- ^ a b Anderson, Michael; Anderson, Susan Leigh, eds. (July 2011). Machine Ethics. Cambridge University Press. ISBN 978-0-521-11235-2.
- ^ a b Wallach, Wendell; Allen, Colin (November 2008). Moral Machines: Teaching Robots Right from Wrong. USA: Oxford University Press. ISBN 978-0-19-537404-9.
- ^ Anderson, Michael; Anderson, Susan Leigh, eds. (July–August 2006). "Special Issue on Machine Ethics". IEEE Intelligent Systems. 21 (4): 10–63. doi:10.1109/mis.2006.70. ISSN 1541-1672. Archived from the original on 2011-11-26.
- ^ Tucker, Patrick (13 May 2014). "Now The Military Is Going To Build Robots That Have Morals". Defense One. Retrieved 9 July 2014.
- ^ "Best Selling Science Books". New York Times. New York Times. September 8, 2014. Retrieved 9 November 2014.
- ^ "European Parliament, Committee on Legal Affairs. Draft Report with recommendations to the Commission on Civil Law Rules on Robotics". European Commission. Retrieved January 12, 2017.
- ^ Wakefield, Jane. "MEPs vote on robots' legal status – and if a kill switch is required". BBC. Retrieved 12 January 2017.
- ^ Four Kinds of Ethical Robots
- ^ Scientists Worry Machines May Outsmart Man By JOHN MARKOFF, NY Times, July 26, 2009.
- ^ Call for debate on killer robots, By Jason Palmer, Science and technology reporter, BBC News, 8/3/09.
- ^ Science New Navy-funded Report Warns of War Robots Going "Terminator", by Jason Mick (Blog), dailytech.com, February 17, 2009.
- ^ Navy report warns of robot uprising, suggests a strong moral compass, by Joseph L. Flatley engadget.com, Feb 18th 2009.
- ^ AAAI Presidential Panel on Long-Term AI Futures 2008–2009 Study, Association for the Advancement of Artificial Intelligence, Accessed 7/26/09.
- ^ Bostrom, Nick; Yudkowsky, Eliezer (2011). "The Ethics of Artificial Intelligence" (PDF). Cambridge Handbook of Artificial Intelligence. Cambridge Press.
- ^ a b Santos-Lang, Chris (2002). "Ethics for Artificial Intelligences". Archived from the original on 2011-12-03.
- ^ Santos-Lang, Christopher (2014). "Moral Ecology Approaches to Machine Ethics". In van Rysewyk, Simon; Pontier, Matthijs. Machine Medical Ethics (PDF). Switzerland: Springer. pp. 111–127. doi:10.1007/978-3-319-08108-3_8.
- ^ Evolving Robots Learn To Lie To Each Other, Popular Science, August 18, 2009
- ^ a b Crawford, Kate (25 June 2016). "Artificial Intelligence's White Guy Problem". The New York Times.
- ^ a b c Angwin, Julia; Larson, Jeff; Mattu, Surya; Kirchner, Lauren (23 May 2016). "Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And it's Biased Against Blacks". ProPublica.
- ^ Executive Office of the President (May 2016). "Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights" (PDF). Obama White House.
- ^ "Big Risks, Big Opportunities: the Intersection of Big Data and Civil Rights". Obama White House. 4 May 2016.
- ^ Sears, Mark. "AI Bias And The 'People Factor' In AI Development". Forbes. Retrieved 2018-12-11.
- ^ a b "How to Prevent Discriminatory Outcomes in Machine Learning". World Economic Forum. Retrieved 2018-12-11.
- ^ Kochi, Erica. "AI is already learning how to discriminate". Quartz at Work. Retrieved 2018-12-11.
- ^ Anderson, Susan Leigh (2011): The Unacceptability of Asimov's Three Laws of Robotics as a Basis for Machine Ethics. In: Machine Ethics, ed. Michael Anderson, Susan Leigh Anderson. New York: Oxford University Press. pp.285–296. ISBN 9780511978036
- ^ Powers, Thomas M. (2011): Prospects for a Kantian Machine. In: Machine Ethics, ed. Michael Anderson, Susan Leigh Anderson. New York: Oxford University Press. pp.464–475.
- ^ Muehlhauser, Luke, Helm, Louie (2012): Intelligence Explosion and Machine Ethics.
- ^ Yudkowsky, Eliezer (2004): Coherent Extrapolated Volition.
- ^ Guarini, Marcello (2011): Computational Neural Modeling and the Philosophy of Ethics. Reflections on the Particularism-Generalism Debate. In: Machine Ethics, ed. Michael Anderson, Susan Leigh Anderson. New York: Oxford University Press. pp.316–334.
- ^ Hibbard, Bill (2014): Ethical Artificial Intelligence. https://arxiv.org/abs/1411.1373
- ^ Anderson, M. and Anderson, S. (2007). Creating an Ethical Intelligent Agent. AI Magazine, Volume 28(4).
- ^ "Microsoft chatbot is taught to swear on Twitter – BBC News". BBC News. Retrieved 2016-04-17.
- ^ Nazaretyan, A. (2014). A. H. Eden, J. H. Moor, J. H. Søraker and E. Steinhart (eds): Singularity Hypotheses: A Scientific and Philosophical Assessment. Minds & Machines, 24(2), pp.245–248.
- ^ Asimov, Isaac (2008). I, robot. New York: Bantam. ISBN 0-553-38256-X.
- ^ Ganascia, Jean-Gabriel. "Ethical system formalization using non-monotonic logics." Proceedings of the Annual Meeting of the Cognitive Science Society. Vol. 29. No. 29. 2007.
References
- Wallach, Wendell; Allen, Colin (November 2008). Moral Machines: Teaching Robots Right from Wrong. USA: Oxford University Press.
- Anderson, Michael; Anderson, Susan Leigh, eds (July 2011). Machine Ethics. Cambridge University Press.
- Storrs Hall, J. (May 30, 2007). Beyond AI: Creating the Conscience of the Machine Prometheus Books.
- Moor, J. (2006). The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), pp. 18–21.
- Anderson, M. and Anderson, S. (2007). Creating an Ethical Intelligent Agent. AI Magazine, Volume 28(4).
Further reading
- Anderson, Michael; Anderson, Susan Leigh, eds (July/August 2006). "Special Issue on Machine Ethics". IEEE Intelligent Systems 21 (4): 10–63.
- Bendel, Oliver (December 11, 2013). Considerations about the Relationship between Animal and Machine Ethics. AI & SOCIETY, DOI 10.1007/s00146-013-0526-3.
- Dabringer, Gerhard, ed. (2010). "Ethical and Legal Aspects of Unmanned Systems. Interviews". Austrian Ministry of Defence and Sports, Vienna 2010, ISBN 978-3-902761-04-0.
- Gardner, A. (1987). An Artificial Approach to Legal Reasoning. Cambridge, MA: MIT Press.
- Georges, T. M. (2003). Digital Soul: Intelligent Machines and Human Values. Cambridge, MA: Westview Press.
- Singer, P.W. (December 29, 2009). Wired for War: The Robotics Revolution and Conflict in the 21st Century: Penguin.