Geoffrey Everest Hinton
Born: 6 December 1947
Institutions: University of Toronto; Carnegie Mellon University; University College London
Thesis: Relaxation and its role in vision (1977)
Doctoral advisor: Christopher Longuet-Higgins
Geoffrey Everest Hinton CC FRS FRSC (born 6 December 1947) is an English-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. Since 2013 he has divided his time between Google (Google Brain) and the University of Toronto.
With David E. Rumelhart and Ronald J. Williams, Hinton co-authored a highly cited paper that applied the backpropagation algorithm (developed by Seppo Linnainmaa in 1970) to multi-layer neural networks, although the paper did not cite the inventor of the method. He is viewed by some as a leading figure in the deep learning community and has been called the "Godfather of Deep Learning". The dramatic image-recognition milestone of AlexNet, designed by his student Alex Krizhevsky for the 2012 ImageNet challenge, helped to revolutionize the field of computer vision.
Education
Hinton was educated at King's College, Cambridge, graduating in 1970 with a Bachelor of Arts in experimental psychology. He continued his studies at the University of Edinburgh, where he was awarded a PhD in artificial intelligence in 1978 for research supervised by Christopher Longuet-Higgins.
Career and research
After his PhD, he worked at the University of Sussex and, after difficulty finding funding in Britain, at the University of California, San Diego, and Carnegie Mellon University. He was the founding director of the Gatsby Charitable Foundation Computational Neuroscience Unit at University College London, and is currently a professor in the computer science department at the University of Toronto. He holds a Canada Research Chair in Machine Learning and is an advisor for the Learning in Machines & Brains program at the Canadian Institute for Advanced Research. Hinton taught a free online course on neural networks on the education platform Coursera in 2012. He joined Google in March 2013 when his company, DNNresearch Inc., was acquired, and plans to "divide his time between his university research and his work at Google".
Hinton's research investigates ways of using neural networks for machine learning, memory, perception and symbol processing. He has authored or co-authored over 200 peer-reviewed publications in these areas.
While Hinton was a professor at Carnegie Mellon University (1982–1987), he, David E. Rumelhart and Ronald J. Williams applied the backpropagation algorithm (also known as the reverse mode of automatic differentiation, published by Seppo Linnainmaa in 1970) to multi-layer neural networks. Their experiments showed that such networks can learn useful internal representations of data. Their paper, however, does not cite the inventor of the method.
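The idea can be illustrated with a minimal sketch (an illustration, not the 1986 paper's exact setup; the network size, learning rate and XOR task are chosen here for brevity): a small network with one hidden layer of sigmoid units is trained by backpropagation, and the hidden units come to encode an internal representation that makes the problem linearly separable at the output.

```python
# Minimal backpropagation sketch: a 2-4-1 sigmoid network learns XOR,
# a task that no single-layer network can solve.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Small random initial weights for input->hidden and hidden->output.
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    # Forward pass: compute hidden representation and prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error derivative layer by layer
    # (sigmoid'(z) = s * (1 - s) where s is the unit's output).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # predictions for the four XOR cases
```

After training, thresholding the outputs at 0.5 recovers the XOR targets 0, 1, 1, 0; the hidden layer has learned intermediate features (roughly, AND/OR-like detectors) that the output layer combines, which is the "useful internal representation" the paper demonstrated.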
During the same period, Hinton co-invented Boltzmann machines with David Ackley and Terry Sejnowski. His other contributions to neural network research include distributed representations, time delay neural networks, mixtures of experts, Helmholtz machines and products of experts. In 2007, Hinton co-authored a paper on unsupervised learning titled "Unsupervised learning of image transformations". An accessible introduction to his research can be found in his articles in Scientific American in September 1992 and October 1993.
In October and November 2017 respectively, Hinton published two open access research papers on the theme of capsule neural networks, which according to Hinton are "finally something that works well."
Notable former PhD students and postdoctoral researchers from his group include Richard Zemel, Brendan Frey, Radford M. Neal, Ruslan Salakhutdinov, Ilya Sutskever, Yann LeCun and Zoubin Ghahramani.
Honours and awards
Hinton was elected a Fellow of the Royal Society (FRS) in 1998. His certificate of election reads: "Geoffrey E. Hinton is internationally distinguished for his work on artificial neural nets, especially how they can be designed to learn without the aid of a human teacher. This may well be the start of autonomous intelligent brain-like machines. He has compared effects of brain damage with effects of losses in such a net, and found striking similarities with human impairment, such as for recognition of names and losses of categorization. His work includes studies of mental imagery, and inventing puzzles for testing originality and creative intelligence. It is conceptual, mathematically sophisticated and experimental. He brings these skills together with striking effect to produce important work of great interest."
In 2001, Hinton was awarded an Honorary Doctorate from the University of Edinburgh. He was the 2005 recipient of the IJCAI Award for Research Excellence lifetime-achievement award. He has also been awarded the 2011 Herzberg Canada Gold Medal for Science and Engineering. In 2013, Hinton was awarded an Honorary Doctorate from the Université de Sherbrooke.
In 2016, he was elected a foreign member of the National Academy of Engineering "For contributions to the theory and practice of artificial neural networks and their application to speech recognition and computer vision". He also received the 2016 IEEE/RSE Wolfson James Clerk Maxwell Award.
He has won the BBVA Foundation Frontiers of Knowledge Award (2016) in the Information and Communication Technologies category "for his pioneering and highly influential work" to endow machines with the ability to learn.
Personal life
Hinton is the great-great-grandson of both the logician George Boole, whose work eventually became one of the foundations of modern computer science, and the surgeon and author James Hinton. His father was the entomologist Howard Hinton. His middle name comes from another relative, George Everest. He is the nephew of the economist Colin Clark. He lost his first wife to ovarian cancer in 1994.
Hinton moved from the U.S. to Canada in part due to disillusionment with Ronald Reagan-era politics and disapproval of military funding of artificial intelligence. He believes political systems will use AI to "terrorize people". Hinton has petitioned against lethal autonomous weapons. Regarding existential risk from artificial intelligence, Hinton has stated that superintelligence seems more than 50 years away, but warns that "there is not a good track record of less intelligent things controlling things of greater intelligence". Asked in 2015 why he continues research despite his grave concerns, Hinton stated "I could give you the usual arguments. But the truth is that the prospect of discovery is too sweet." Hinton has also stated that "It is very hard to predict beyond five years" what advances AI will bring.
References
- Anon (2015). Hinton, Prof. Geoffrey Everest. ukwhoswho.com. Who's Who (online Oxford University Press ed.). A & C Black, an imprint of Bloomsbury Publishing plc. doi:10.1093/ww/9780199540884.013.20261 (subscription required)
- Geoffrey Hinton publications indexed by Google Scholar
- Geoffrey Hinton at the Mathematics Genealogy Project
- Geoffrey E. Hinton's Academic Genealogy
- Gregory, R. L.; Murrell, J. N. (2006). "Hugh Christopher Longuet-Higgins". Biographical Memoirs of Fellows of the Royal Society. 52: 149. doi:10.1098/rsbm.2006.0012.
- Zemel, Richard Stanley (1994). A minimum description length framework for unsupervised learning. proquest.com (PhD thesis). University of Toronto. OCLC 222081343.
- Frey, Brendan John (1998). Bayesian networks for pattern classification, data compression, and channel coding. proquest.com (PhD thesis). University of Toronto. OCLC 46557340.
- Neal, Radford (1995). Bayesian learning for neural networks. proquest.com (PhD thesis). University of Toronto. OCLC 46499792.
- Salakhutdinov, Ruslan (2009). Learning deep generative models. proquest.com (PhD thesis). University of Toronto. ISBN 9780494610800. OCLC 785764071.
- Sutskever, Ilya (2013). Training Recurrent Neural Networks. proquest.com (PhD thesis). University of Toronto. OCLC 889910425.
- Anon (1998). "Professor Geoffrey Hinton FRS". London: Royal Society. Archived from the original on 2015-11-03. One or more of the preceding sentences incorporates text from the royalsociety.org website, where "all text published under the heading 'Biography' on Fellow profile pages is available under Creative Commons Attribution 4.0 International License." ("Royal Society Terms, conditions and policies". Archived from the original on 11 November 2016. Retrieved 2016-03-09.)
- Daniela Hernandez (7 May 2013). "The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI". Wired. Retrieved 10 May 2013.
- "Geoffrey E. Hinton – Google AI". Google AI.
- Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (1986-10-09). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. doi:10.1038/323533a0. ISSN 1476-4687.
- "Geoffrey Hinton was briefly a Google intern in 2012 because of bureaucracy – TechCrunch". techcrunch.com. Retrieved 2018-03-28.
- Somers, James. "Progress in AI seems like it's accelerating, but here's why it could be plateauing". MIT Technology Review. Retrieved 2018-03-28.
- "How U of T's 'godfather' of deep learning is reimagining AI". University of Toronto News. Retrieved 2018-03-28.
- "'Godfather' of deep learning is reimagining AI". Retrieved 2018-03-28.
- "Geoffrey Hinton, the 'godfather' of deep learning, on AlphaGo". Macleans.ca. 2016-03-18. Retrieved 2018-03-28.
- Dave Gershgorn (18 June 2018). "The inside story of how AI got good enough to dominate Silicon Valley". Quartz. Retrieved 5 October 2018.
- Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E. (2012-12-03). "ImageNet classification with deep convolutional neural networks". Advances in Neural Information Processing Systems. Curran Associates Inc.: 1097–1105.
- "How a Toronto professor's research revolutionized artificial intelligence | Toronto Star". thestar.com. Retrieved 2018-03-13.
- Hinton, Geoffrey Everest (1977). Relaxation and its role in vision. lib.ed.ac.uk (PhD thesis). University of Edinburgh. hdl:1842/8121. OCLC 18656113. EThOS uk.bl.ethos.482889.
- Smith, Craig S. (23 June 2017). "The Man Who Helped Turn Toronto into a High-Tech Hotbed". The New York Times. Retrieved 27 June 2017.
- "U of T neural networks start-up acquired by Google" (Press release). Toronto, ON. 12 March 2013. Retrieved 13 March 2013.
- Geoffrey Hinton publications indexed by the Scopus bibliographic database. (subscription required)
- Ackley, David H.; Hinton, Geoffrey E.; Sejnowski, Terrence J. (1985). "A learning algorithm for Boltzmann machines". Cognitive Science. Elsevier. 9 (1): 147–169.
- Hinton, Geoffrey E. "Geoffrey E. Hinton's Publications in Reverse Chronological Order".
- Sabour, Sara; Frosst, Nicholas; Hinton, Geoffrey E. (October 2017). "Dynamic Routing Between Capsules".
- "Matrix capsules with EM routing" (3 November 2017). OpenReview.net.
- Geib, Claudia (2 November 2017). "We've Finally Created an AI Network That's Been Decades in the Making". Futurism.com.
- "Yann LeCun's Research and Contributions". yann.lecun.com. Retrieved 2018-03-13.
- "Current and Previous Recipients". David E. Rumelhart Prize.
- Anon (1998). "Certificate of election EC/1998/21: Geoffrey Everest Hinton". London: Royal Society. Archived from the original on 5 November 2015.
- "Artificial intelligence scientist gets $1M prize". CBC News. 14 February 2011.
- "National Academy of Engineering Elects 80 Members and 22 Foreign Members". NAE. 8 February 2016.
- "2016 IEEE Medals and Recognitions Recipients and Citations" (PDF). IEEE. Retrieved July 7, 2016.
- The Isaac Newton of logic
- Salt, George (1978). "Howard Everest Hinton". Biographical Memoirs of Fellows of the Royal Society. 24 (0): 150–182. doi:10.1098/rsbm.1978.0006. ISSN 0080-4606.
- Shute, Joe (26 August 2017). "The 'Godfather of AI' on making machines clever and whether robots really will learn to kill us all?". The Telegraph. Retrieved 20 December 2017.
- Khatchadourian, Raffi (16 November 2015). "The Doomsday Invention". The New Yorker. Retrieved 30 January 2018.