Centre for the Study of Existential Risk

From Wikipedia, the free encyclopedia
Formation: 2012
Founders: Huw Price, Martin Rees, Jaan Tallinn
Purpose: Higher education and research
Headquarters: Cambridge, England
Parent organization: University of Cambridge
Website: cser.ac.uk
British cosmologist and astrophysicist Martin Rees, former President of the Royal Society and a co-founder of CSER

The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, intended to study possible extinction-level threats posed by present or future technology.[1] The co-founders of the centre are Huw Price (a philosophy professor at Cambridge), Martin Rees (a cosmologist, astrophysicist, and former President of the Royal Society) and Jaan Tallinn (a computer programmer and co-founder of Skype).[2] CSER's advisors include philosopher Peter Singer, computer scientist Stuart J. Russell, statistician David Spiegelhalter, and cosmologist Max Tegmark.[3] Their "goal is to steer a small fraction of Cambridge’s great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future."[3][4]

Areas of focus

The centre's founding was announced in November 2012.[5] Its name stems from Oxford philosopher Nick Bostrom's concept of existential risk, or risk "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential".[6] This includes technologies that might permanently deplete humanity's resources or block further scientific progress, in addition to ones that put the species itself at risk.

Among the global catastrophic risks studied by CSER are those stemming from possible future advances in artificial intelligence. The potential dangers of artificial general intelligence featured prominently in early discussions of CSER, and some press coverage likened them to a robot uprising à la The Terminator.[7][8] Price told the AFP news agency, "It seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology". When that happens, he added, "we're no longer the smartest things around," and humanity will risk being at the mercy of "machines that are not malicious, but machines whose interests don't include us".[9] Price has also described synthetic biology as dangerous because "[as a result of] new innovations, the steps necessary to produce a weaponized virus or other bioterror agent have been dramatically simplified", with the consequence that "the number of individuals needed to wipe us all out is declining quite steeply."[10][11]

Other risks CSER seeks to evaluate include those from molecular nanotechnology,[12] extreme climate change,[13] and systemic failures of fragile networks.[14][15]

Media coverage

CSER has received coverage in many newspapers, particularly in the United Kingdom, across a range of the centre's topics of interest.[4][7][8][9][10][14][16] University of Cambridge Research News focused its coverage on risks from artificial general intelligence.[17] CSER was also profiled in the special Frankenstein issue of Science in 2018.[18] That article noted that many scientists regard global nuclear war as the only serious existential risk to humanity, quoting Joyce Tait of the Innogen Institute in Edinburgh: "There is nothing [else] on the horizon."[18]


References

  1. ^ Biba, Erin (1 June 2015). "Meet the Co-Founder of an Apocalypse Think Tank". Scientific American. Retrieved 2 July 2016.
  2. ^ Lewsey, Fred (25 November 2012). "Humanity's last invention and our uncertain future". Research News. Retrieved 24 December 2012.
  3. ^ a b "About CSER". Centre for the Study of Existential Risk. Archived from the original on 12 December 2015. Retrieved 2 May 2014.
  4. ^ a b Connor, Steve (14 September 2013). "Can We Survive?". The New Zealand Herald.
  5. ^ "Cambridge to study technology's risk to humans". AP News. 25 November 2012.
  6. ^ Bostrom, Nick (2002). "Existential Risks: Analyzing Human Extinction Scenarios" (PDF). Journal of Evolution and Technology. 9 (1). Retrieved 27 March 2014.
  7. ^ a b Gaskell, Adi (27 November 2012). "Risk of a Terminator Style Robot Uprising to be Studied". Technorati. Archived from the original on 30 November 2012. Retrieved 2 December 2012.
  8. ^ a b Naughton, John (2 December 2012). "Could robots soon add to mankind's existential threats?". The Observer. Retrieved 24 December 2012.
  9. ^ a b Hui, Sylvia (25 November 2012). "Cambridge to study technology's risks to humans". Associated Press. Retrieved 30 January 2013.
  10. ^ a b Paramaguru, Kharunya (29 November 2012). "Rise of the machines: Cambridge University to study technology's 'existential risk' to mankind". Time. Retrieved 2 May 2014.
  11. ^ "Biological and Biotechnological Risks". Retrieved 29 May 2015.
  12. ^ "Molecular nanotechnology". Centre for the Study of Existential Risk. Archived from the original on 2014-05-03. Retrieved 4 May 2014.
  13. ^ "Extreme Climate Change". Retrieved 29 May 2015.
  14. ^ a b Osborne, Hannah (13 September 2013). "Doomsday list for apocalypse: bioterrorism, cyber-attacks and hostile computers threaten mankind". International Business Times. Retrieved 2 May 2014.
  15. ^ "Systemic risks and fragile networks". Centre for the Study of Existential Risk. Retrieved 2 May 2014.
  16. ^ "CSER media coverage". Centre for the Study of Existential Risk. Archived from the original on 30 June 2014. Retrieved 19 June 2014.
  17. ^ "Humanity's Last Invention and Our Uncertain Future". University of Cambridge Research News.
  18. ^ a b Kupferschmidt, Kai (12 January 2018). "Taming the monsters of tomorrow". Science. doi:10.1126/science.359.6372.152.
