Yann LeCun

From Wikipedia, the free encyclopedia
Yann LeCun at the University of Minnesota
Born: July 8, 1960
Alma mater: Pierre and Marie Curie University
Known for: Deep learning
Scientific career
Institutions: Bell Labs; New York University; Facebook Artificial Intelligence Research
Thesis: Modèles connexionnistes de l'apprentissage (connectionist learning models) (1987)
Doctoral advisor: Maurice Milgram
Website: yann.lecun.com

Yann LeCun (/ləˈkʌn/;[1] born 1960) is a French computer scientist working primarily in the fields of machine learning, computer vision, mobile robotics, and computational neuroscience. He is the Chief Artificial Intelligence Scientist at Facebook AI Research and is well known for his work on optical character recognition and computer vision using convolutional neural networks (CNNs), of which he is a founding father.[2][3] He is also one of the main creators of the DjVu image compression technology (together with Léon Bottou and Patrick Haffner), and he co-developed the Lush programming language with Léon Bottou.

Life

Yann LeCun was born near Paris, France, in 1960. He received a Diplôme d'Ingénieur from the École Supérieure d'Ingénieurs en Électrotechnique et Électronique (ESIEE), Paris, in 1983 and a PhD in Computer Science from Université Pierre et Marie Curie in 1987; during his doctoral work he proposed an early form of the back-propagation learning algorithm for neural networks.[4]

He was a postdoctoral research associate in Geoffrey Hinton's lab at the University of Toronto from 1987 to 1988.

In 1988, he joined the Adaptive Systems Research Department at AT&T Bell Laboratories in Holmdel, New Jersey, United States, headed by Lawrence D. Jackel, where he developed a number of new machine learning methods, including a biologically inspired model of image recognition called convolutional neural networks,[5] the "Optimal Brain Damage" regularization method,[6] and the Graph Transformer Networks method (similar to conditional random fields), which he applied to handwriting recognition and OCR (a minimal illustrative sketch of a convolutional network follows below).[7] The bank check recognition system that he helped develop was widely deployed by NCR and other companies, reading over 10% of all the checks in the US in the late 1990s and early 2000s.
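
A convolutional network of the kind described above learns a hierarchy of local image filters followed by a small classifier. The following is a purely illustrative sketch in PyTorch, not LeCun's original LeNet architecture; it assumes MNIST-style 28×28 grayscale inputs, and its layer sizes, ReLU activations, and max pooling are modern simplifications.

    # Illustrative sketch only: a small LeNet-style convolutional network.
    # Layer sizes, ReLU activations, and max pooling are modern simplifications,
    # not the original late-1980s design.
    import torch
    import torch.nn as nn

    class SmallConvNet(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 6, kernel_size=5, padding=2),   # 28x28 -> 28x28
                nn.ReLU(),
                nn.MaxPool2d(2),                              # 28x28 -> 14x14
                nn.Conv2d(6, 16, kernel_size=5),              # 14x14 -> 10x10
                nn.ReLU(),
                nn.MaxPool2d(2),                              # 10x10 -> 5x5
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(16 * 5 * 5, 120),
                nn.ReLU(),
                nn.Linear(120, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # One forward pass on a batch of four 28x28 grayscale images (e.g. digits).
    logits = SmallConvNet()(torch.randn(4, 1, 28, 28))
    print(logits.shape)  # torch.Size([4, 10])

Training such a network with back-propagation on labeled digit images is, in essence, the approach LeCun and colleagues applied to handwritten zip code recognition.[5]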

In 1996, he joined AT&T Labs-Research as head of the Image Processing Research Department, which was part of Lawrence Rabiner's Speech and Image Processing Research Lab, and worked primarily on the DjVu image compression technology,[8] used by many websites, notably the Internet Archive, to distribute scanned documents. His collaborators at AT&T included Léon Bottou and Vladimir Vapnik.

After a brief tenure as a Fellow of the NEC Research Institute (now NEC Laboratories America) in Princeton, New Jersey, he joined New York University (NYU) in 2003, where he is Silver Professor of Computer Science and Neural Science at the Courant Institute of Mathematical Sciences and the Center for Neural Science. He is also a professor at the Tandon School of Engineering.[9][10] At NYU, he has worked primarily on energy-based models for supervised and unsupervised learning,[11] feature learning for object recognition in computer vision,[12] and mobile robotics.[13]

In 2012, he became the founding director of the NYU Center for Data Science.[14] On December 9, 2013, LeCun became the first director of Facebook AI Research in New York City,[15][16] and stepped down from the NYU-CDS directorship in early 2014.

LeCun is a member of the US National Academy of Engineering and the recipient of the 2014 IEEE Neural Network Pioneer Award and the 2015 PAMI Distinguished Researcher Award.

In 2013, he and Yoshua Bengio co-founded the International Conference on Learning Representations, which adopted a post-publication open review process that he had previously advocated on his website. He was the chair and organizer of the "Learning Workshop", held every year between 1986 and 2012 in Snowbird, Utah. He is a member of the Science Advisory Board of the Institute for Pure and Applied Mathematics[17] at UCLA. He is the co-director of the Learning in Machines & Brains research program (formerly Neural Computation & Adaptive Perception) of CIFAR.[18]

In 2016, he was a visiting professor of computer science on the "Chaire Annuelle Informatique et Sciences Numériques" (annual chair in computer science and digital sciences) at the Collège de France in Paris. His "leçon inaugurale" (inaugural lecture) was an important event in 2016 Paris intellectual life.[citation needed] On October 11, he was awarded a Doctor Honoris Causa degree by the Instituto Politécnico Nacional (IPN) in Mexico City.[19]

In 2017, LeCun declined an invitation to lecture at the King Abdullah University of Science and Technology in Saudi Arabia because he believed that, as an atheist, he could be regarded as a terrorist in the country.[20] In September 2018, he received the Harold Pender Award from the University of Pennsylvania.

In October 2018, he received a Doctor Honoris Causa degree from EPFL.[21][22]

References

  1. ^ Fun Stuff - Yann LeCun
  2. ^ Convolutional Nets and CIFAR-10: An Interview with Yann LeCun. Kaggle 2014
  3. ^ LeCun, Yann; Léon Bottou; Yoshua Bengio; Patrick Haffner (1998). "Gradient-based learning applied to document recognition" (PDF). Proceedings of the IEEE. 86 (11): 2278–2324. doi:10.1109/5.726791. Retrieved 16 November 2013.
  4. ^ Y. LeCun: Une procédure d'apprentissage pour réseau a seuil asymmetrique (a Learning Scheme for Asymmetric Threshold Networks), Proceedings of Cognitiva 85, 599–604, Paris, France, 1985.
  5. ^ Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard and L. D. Jackel: Backpropagation Applied to Handwritten Zip Code Recognition, Neural Computation, 1(4):541-551, Winter 1989.
  6. ^ Yann LeCun, J. S. Denker, S. Solla, R. E. Howard and L. D. Jackel: Optimal Brain Damage, in Touretzky, David (Eds), Advances in Neural Information Processing Systems 2 (NIPS*89), Morgan Kaufmann, Denver, CO, 1990.
  7. ^ Yann LeCun, Léon Bottou, Yoshua Bengio and Patrick Haffner: Gradient Based Learning Applied to Document Recognition, Proceedings of IEEE, 86(11):2278–2324, 1998.
  8. ^ Léon Bottou, Patrick Haffner, Paul G. Howard, Patrice Simard, Yoshua Bengio and Yann LeCun: High Quality Document Image Compression with DjVu, Journal of Electronic Imaging, 7(3):410–425, 1998.
  9. ^ "People - Electrical and Computer Engineering". Polytechnic Institute of New York University. Retrieved 13 March 2013.
  10. ^ Yann LeCun's personal website: http://yann.lecun.com/
  11. ^ Yann LeCun, Sumit Chopra, Raia Hadsell, Ranzato Marc'Aurelio and Fu-Jie Huang: A Tutorial on Energy-Based Learning, in Bakir, G. and Hofman, T. and Schölkopf, B. and Smola, A. and Taskar, B. (Eds), Predicting Structured Data, MIT Press, 2006.
  12. ^ Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato and Yann LeCun: What is the Best Multi-Stage Architecture for Object Recognition?, Proc. International Conference on Computer Vision (ICCV'09), IEEE, 2009
  13. ^ Raia Hadsell, Pierre Sermanet, Marco Scoffier, Ayse Erkan, Koray Kavukcuoglu, Urs Muller and Yann LeCun: Learning Long-Range Vision for Autonomous Off-Road Driving, Journal of Field Robotics, 26(2):120–144, February 2009.
  14. ^ NYU Center for Data Science: http://cds.nyu.edu
  15. ^ Yann LeCun, Facebook post: https://www.facebook.com/yann.lecun/posts/10151728212367143
  16. ^ "DIRECTOR OF AI RESEARCH". facebook. 2016. Archived from the original on April 26, 2017.
  17. ^ Institute for Pure and Applied Mathematics: http://www.ipam.ucla.edu/programs/gss2012/
  18. ^ "Neural Computation & Adaptive Perception Advisory Committee Yann LeCun". CIFAR. Retrieved 16 December 2013.
  19. ^ "Primera generación de Doctorados Honoris Causa en el IPN". Retrieved 11 October 2016.
  20. ^ Manas Sen Gupta (22 May 2017). "The Reason Why Facebook's AI Research Director Did Not Visit Saudi Arabia Has Set The Internet On Fire". TopYaps. Retrieved 28 December 2017.
  21. ^ "EPFL celebrates 1,043 new Master's graduates". Retrieved 27 January 2019.
  22. ^ "Yann LeCun @EPFL - "Self-supervised learning: could machines learn like humans?"". Retrieved 27 January 2019.

External links