Machine Intelligence Research Institute

Machine Intelligence Research Institute
Formation: 2000
Type: Nonprofit research institute
Purpose: Research into friendly artificial intelligence
Key people: Eliezer Yudkowsky
Website: intelligence.org

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit organization founded in 2000 by Eliezer Yudkowsky. Originally created to accelerate the development of artificial intelligence, it has focused since 2005 on identifying and managing the potential risks to humanity from future AI systems that could become superintelligent. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

History

In 2000, Eliezer Yudkowsky, who was mostly self-educated and had been involved in the Extropian group, founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins; the institute's original purpose was to accelerate the development of artificial intelligence (AI).[1][2][3] Yudkowsky came to be concerned that AI systems developed in the future could become superintelligent and pose risks to humanity,[1] and in 2005 the institute moved to Silicon Valley and began to focus on ways to identify and manage those risks. At the time, scientists in the field were largely unconcerned with such risks, though a group known as transhumanists had been expressing concerns.[2]

Starting in 2006, the institute organized the Singularity Summit to discuss the future of AI, including its risks, initially in cooperation with Stanford University and with funding from Peter Thiel; the San Francisco Chronicle described the first conference as a "Bay Area coming-out party for the tech-inspired philosophy called transhumanism".[4][5] In 2011, its offices were four apartments in downtown Berkeley.[6] In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University,[7] and the next month took the name "Machine Intelligence Research Institute".[8]

In 2014 and 2015, public and scientific interest in the risks of AI grew; the shift from a topic once considered "crackpot" to a mainstream concern spurred further donations to fund research at MIRI and similar organizations.[3][9]:327

Research and approach

Nate Soares presenting an overview of the AI alignment problem at Google.

MIRI's approach to identifying and managing the risks of AI, led by Yudkowsky, focuses primarily on how to design friendly AI, covering both the initial design of AI systems and the mechanisms needed to ensure that evolving AI systems remain friendly.[3][10][11]

MIRI researchers advocate beginning safety work early, as a precautionary measure, rather than waiting until it is too late.[12] However, MIRI researchers have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is "just around the corner".[10] MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change and has developed new measures of the relative computational power of humans and computer hardware.[13]

Works by MIRI staff

  • Graves, Matthew (8 November 2017). "Why We Should Be Concerned About Artificial Superintelligence". Skeptic. The Skeptics Society. Retrieved 28 July 2018.
  • LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications.
  • Soares, Nate; Levinstein, Benjamin A. (2017). "Cheating Death in Damascus" (PDF). Formal Epistemology Workshop (FEW). Retrieved 28 July 2018.
  • Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer; Armstrong, Stuart (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.
  • Soares, Nate; Fallenstein, Benja (2015). "Aligning Superintelligence with Human Interests: A Technical Research Agenda" (PDF). In Miller, James; Yampolskiy, Roman; Armstrong, Stuart; et al. The Technological Singularity: Managing the Journey. Springer.
  • Taylor, Jessica (2016). "Quantilizers: A Safer Alternative to Maximizers for Limited Optimization". Workshops at the Thirtieth AAAI Conference on Artificial Intelligence.
  • Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Bostrom, Nick; Ćirković, Milan. Global Catastrophic Risks. Oxford University Press. ISBN 978-0199606504.
  • Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI" (PDF). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Berlin: Springer.

References

  1. "MIRI: Artificial Intelligence: The Danger of Good Intentions". Future of Life Institute. 11 October 2015.
  2. Khatchadourian, Raffi. "The Doomsday Invention". The New Yorker.
  3. Waters, Richard (31 October 2014). "Artificial intelligence: machine v man". Financial Times. Retrieved 27 August 2018.
  4. Abate, Tom (2006). "Smarter than thou?". San Francisco Chronicle. Retrieved 12 October 2015.
  5. Abate, Tom (2007). "Public meeting will re-examine future of artificial intelligence". San Francisco Chronicle. Retrieved 12 October 2015.
  6. Kaste, Martin (11 January 2011). "The Singularity: Humanity's Last Invention?". All Things Considered, NPR.
  7. "Press release: Singularity University Acquires the Singularity Summit". Singularity University. 9 December 2012.
  8. "Press release: We are now the "Machine Intelligence Research Institute" (MIRI)". Machine Intelligence Research Institute. 30 January 2013.
  9. Tegmark, Max (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. United States: Knopf. ISBN 978-1-101-94659-6.
  10. LaFrance, Adrienne (2015). "Building Robots With Better Morals Than Humans". The Atlantic. Retrieved 12 October 2015.
  11. Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  12. Sathian, Sanjena. "The Most Important Philosophers of Our Time Reside in Silicon Valley". OZY. Retrieved 28 July 2018.
  13. Hsu, Jeremy (2015). "Making Sure AI's Rapid Rise Is No Surprise". Discover. Retrieved 12 October 2015.
