OpenAI

From Wikipedia, the free encyclopedia
OpenAI
Founded: December 11, 2015
Founders: Elon Musk, Sam Altman
Type: 501(c)(3) nonprofit organization[1][2]
Location: San Francisco, California
Products: OpenAI Gym
Key people: Ilya Sutskever, Greg Brockman
Endowment: US$1 billion pledged (2015)
Website: www.openai.com

OpenAI is a non-profit artificial intelligence (AI) research organization that aims to promote and develop friendly AI in such a way as to benefit humanity as a whole. Founded in late 2015, the San Francisco-based organization aims to “freely collaborate” with other institutions and researchers by making its patents and research open to the public.[4][5] The founders (notably Elon Musk and Sam Altman) are motivated in part by concerns about existential risk from artificial general intelligence.[6][7]

History

In December 2015, Musk, Altman and other investors announced the formation of the organization, pledging over US$1 billion to the venture.[4]

On April 27, 2016, OpenAI released a public beta of “OpenAI Gym”, its platform for reinforcement learning research.[8]

On December 5, 2016, OpenAI released Universe, a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.[9][10][11][12]

On February 21, 2018, Musk resigned his board seat, citing “a potential future conflict (of interest)” with Tesla's AI development for self-driving cars, but remained a donor.[13]

As of 2018, OpenAI is headquartered in San Francisco's Mission District, sharing an office building with Neuralink, another company co-founded by Musk.[14]

Participants

Other backers of the project include individuals such as Peter Thiel and Reid Hoffman, and companies such as Amazon Web Services and Infosys.[16][17]

The group started in early January 2016 with nine researchers. According to Wired, Brockman met with Yoshua Bengio, one of the “founding fathers” of the deep learning movement, and drew up a list of the “best researchers in the field”. Microsoft's Peter Lee stated that the cost of a top AI researcher exceeds the cost of a top NFL quarterback prospect. While OpenAI pays corporate-level (rather than nonprofit-level) salaries, it doesn't currently pay AI researchers salaries comparable to those of Facebook or Google. Nevertheless, Sutskever stated that he was willing to leave Google for OpenAI “partly because of the very strong group of people and, to a very large extent, because of its mission.” Brockman stated that “the best thing that I could imagine doing was moving humanity closer to building real AI in a safe way.” OpenAI researcher Wojciech Zaremba stated that he turned down “borderline crazy” offers of two to three times his market value to join OpenAI instead.[7]

Motives

Some scientists, such as Stephen Hawking and Stuart Russell, believe that if advanced AI someday gains the ability to redesign itself at an ever-increasing rate, an unstoppable “intelligence explosion” could lead to human extinction. Musk characterizes AI as humanity's “biggest existential threat”.[18] OpenAI's founders structured it as a non-profit so that they could focus its research on creating a positive long-term human impact.[4]

OpenAI states that “it's hard to fathom how much human-level AI could benefit society,” and that it's equally difficult to comprehend “how much it could damage society if built or used incorrectly”.[4] Research on safety cannot safely be postponed: “because of AI's surprising history, it's hard to predict when human-level AI might come within reach.”[19] OpenAI states that AI “should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible...”,[4] a sentiment that has been expressed elsewhere in reference to a potentially enormous class of AI-enabled products: “Are we really willing to let our society be infiltrated by autonomous software and hardware agents whose details of operation are known only to a select few? Of course not.”[20] Co-chair Sam Altman expects the decades-long project to surpass human intelligence.[21]

Vishal Sikka, former CEO of Infosys, stated that an “openness” where the endeavor would “produce results generally in the greater interest of humanity” was a fundamental requirement for his support, and that OpenAI “aligns very nicely with our long-held values” and their “endeavor to do purposeful work”.[22] Cade Metz of Wired suggests that corporations such as Amazon may be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook that own enormous supplies of proprietary data. Altman states that Y Combinator companies will share their data with OpenAI.[21]

Strategy

Musk posed the question: “what is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity.” Musk acknowledged that “there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about”; nonetheless, the best defense is “to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower.”[16]

Musk and Altman's counter-intuitive strategy of trying to reduce the risk that AI will cause overall harm, by giving AI to everyone, is controversial among those who are concerned with existential risk from artificial intelligence. Philosopher Nick Bostrom is skeptical of Musk's approach: “If you have a button that could do bad things to the world, you don't want to give it to everyone.”[7] During a 2016 conversation about the technological singularity, Altman said that “we don’t plan to release all of our source code” and mentioned a plan to “allow wide swaths of the world to elect representatives to a new governance board”. Greg Brockman stated that “Our goal right now... is to do the best thing there is to do. It’s a little vague.”[23]

Products

Gym

Gym aims to provide an easy-to-set-up general-intelligence benchmark with a wide variety of environments - somewhat akin to, but broader than, the ImageNet Large Scale Visual Recognition Challenge used in supervised learning research - and hopes to standardize the way in which environments are defined in AI research publications, so that published research becomes more easily reproducible.[8][24] The project claims to provide the user with a simple interface. As of June 2017, Gym can only be used with Python.[25] As of September 2017, the Gym documentation site was not maintained, and active work focused instead on its GitHub page.[26]
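
The interface follows a simple loop of resetting an environment and repeatedly stepping it with an action. The minimal sketch below uses the bundled CartPole-v0 environment and the reset/step signatures Gym documented at the time; the random policy is purely illustrative:

    import gym

    # Create one of the bundled example environments.
    env = gym.make("CartPole-v0")

    observation = env.reset()
    for _ in range(100):
        # Sample a random action from this environment's action space.
        action = env.action_space.sample()
        # step() returns the next observation, the reward, whether the
        # episode has finished, and a dict of diagnostic information.
        observation, reward, done, info = env.step(action)
        if done:
            observation = env.reset()
    env.close()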

RoboSumo

In “RoboSumo”, virtual humanoid “metalearning” robots initially lack knowledge of how to even walk, and are given the goals of learning to move around and of pushing the opposing agent out of the ring. Through this adversarial learning process, the agents learn how to adapt to changing conditions; when an agent is then removed from this virtual environment and placed in a new virtual environment with high winds, the agent braces to remain upright, suggesting it had learned how to balance in a generalized way.[27][28] OpenAI's Igor Mordatch argues that competition between agents can create an intelligence “arms race” that can increase an agent's ability to function, even outside the context of the competition.
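
The competitive self-play dynamic can be illustrated with a deliberately simplified sketch (this is not OpenAI's RoboSumo code; the pushing contest, the single-parameter “policies”, and the hill-climbing update are illustrative assumptions): two agents repeatedly play against each other, and each keeps only the policy changes that beat the current opponent, producing an arms-race-like escalation.

    import random

    def play_match(strength_a, strength_b, steps=100):
        """Toy pushing contest: +1 if A forces B out, -1 if B forces A out, 0 for a draw."""
        position = 0.0  # positive values favour agent A, negative favour agent B
        for _ in range(steps):
            position += random.gauss(strength_a - strength_b, 0.1)
            if position > 1.0:
                return 1
            if position < -1.0:
                return -1
        return 0

    # Each agent's "policy" is a single push-strength parameter, improved by
    # random hill climbing against the current opponent.
    strength_a = strength_b = 0.0
    for generation in range(500):
        candidate = strength_a + random.gauss(0, 0.05)
        if play_match(candidate, strength_b) > 0:
            strength_a = candidate
        candidate = strength_b + random.gauss(0, 0.05)
        if play_match(strength_a, candidate) < 0:
            strength_b = candidate

    print("final strengths:", round(strength_a, 2), round(strength_b, 2))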

Debate Game

In 2018, OpenAI launched the Debate Game, which teaches machines to debate toy problems in front of a human judge. The purpose is to research whether such an approach may assist in auditing AI decisions and in developing explainable AI.[29][30]
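
A schematic sketch of the debate setup (this is not OpenAI's implementation; the question, the canned arguments, and the stand-in debaters are hypothetical): two automated debaters alternate short statements about a question, and a human judge decides which side argued more convincingly.

    def debate(question, debater_pro, debater_con, rounds=3):
        """Alternate arguments for a fixed number of rounds, then ask a human to judge."""
        print("Question:", question)
        for i in range(rounds):
            print("Pro:", debater_pro(question, i))
            print("Con:", debater_con(question, i))
        verdict = input("Judge, which side argued better (pro/con)? ")
        return verdict.strip().lower() == "pro"

    # Hypothetical stand-in debaters that replay canned arguments.
    pro_arguments = ["This patch of pixels shows fur.", "The ears are pointed.", "There are whiskers."]
    con_arguments = ["The tail is unusually long.", "The snout looks wide.", "The posture suggests a dog."]

    if __name__ == "__main__":
        debate("Is the image a cat?",
               lambda q, i: pro_arguments[i],
               lambda q, i: con_arguments[i])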

OpenAI Five

OpenAI Five is a team of five OpenAI-curated bots that play the competitive five-on-five video game Dota 2 against human players at a high skill level, learning entirely through trial-and-error algorithms. The first public demonstration, involving a single bot rather than a full team, occurred at The International 2017, the annual premier championship tournament for the game, where Dendi, a professional Ukrainian player, lost to a bot in a live 1v1 matchup.[31][32] After the match, CTO Greg Brockman explained that the bot had learned by playing against itself for two weeks of real time, and that the learning software was a step in the direction of creating software that can handle complex tasks “like being a surgeon”.[33][34] The system uses a form of reinforcement learning: the bots learn over time by playing against themselves hundreds of times a day for months, and are rewarded for actions such as killing an enemy and destroying towers.[35][36][37]

By June 2018, the bots' abilities had expanded so that they could play together as a full team of five, and they were able to defeat teams of amateur and semi-professional players.[38][39][40][41] At The International 2018, OpenAI Five played in two games against professional players.[42][43] Although the bots lost both games, OpenAI considered it a successful venture, stating that playing against some of the best players in Dota 2 allowed them to analyze and adjust their algorithms for future games.[44]
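
The shaped-reward idea can be sketched schematically (this is not OpenAI's code; the event names and weight values are illustrative assumptions): each self-play game produces a log of in-game events, the shaped return sums a weight for each event, and a reinforcement-learning update would then favour the actions taken in high-return games.

    import random

    # Illustrative, assumed reward weights for intermediate events and a win.
    REWARDS = {
        "enemy_kill": 1.0,
        "tower_destroyed": 2.0,
        "win": 10.0,
    }

    def episode_return(events):
        """Sum the shaped rewards observed during one self-play game."""
        return sum(REWARDS.get(event, 0.0) for event in events)

    # Simulate a small batch of self-play games as random event logs.
    batch = [
        [random.choice(list(REWARDS)) for _ in range(random.randint(3, 8))]
        for _ in range(5)
    ]
    for game_index, events in enumerate(batch):
        print(f"game {game_index}: shaped return = {episode_return(events):.1f}")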

Dactyl

Dactyl uses machine learning to train a robot Shadow Hand from scratch, using the same reinforcement learning algorithm code that OpenAI Five uses. The robot hand is trained entirely in simulation, even though the simulation is not physically accurate.[45][46]
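
OpenAI's “Learning Dexterity” report[45] describes closing the gap between the imprecise simulator and the physical hand with domain randomization: physical parameters are re-sampled for every training episode so the learned policy cannot overfit to any single physics model. The sketch below only illustrates that idea; the parameter names and ranges are assumptions, not OpenAI's values.

    import random

    def sample_sim_parameters():
        """Re-sample physical parameters for one training episode (illustrative ranges)."""
        return {
            "object_mass_kg": random.uniform(0.03, 0.30),
            "finger_friction": random.uniform(0.5, 1.5),
            "actuator_delay_s": random.uniform(0.0, 0.04),
            "gravity_m_s2": random.uniform(9.5, 10.1),
        }

    def train(num_episodes=3):
        for episode in range(num_episodes):
            params = sample_sim_parameters()
            # A real system would rebuild the simulator with these parameters
            # and run one reinforcement-learning episode inside it.
            print(f"episode {episode}: {params}")

    if __name__ == "__main__":
        train()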

See also

References

  1. ^ Levy, Steven (December 11, 2015). "How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over". Medium/Backchannel. Retrieved December 11, 2015. “Elon Musk: ...we came to the conclusion that having a 501(c)(3)... would probably be a good thing to do”
  2. ^ Greg Brockman
  3. ^ Markoff, John (December 11, 2015). "Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors". The New York Times. Retrieved December 12, 2015.
  4. ^ a b c d e "Tech giants pledge $1bn for 'altruistic AI' venture, OpenAI". BBC News. 12 December 2015. Retrieved 19 December 2015.
  5. ^ "Introducing OpenAI". OpenAI Blog. 12 December 2015.
  6. ^ Lewontin, Max (14 December 2015). "Open AI: Effort to democratize artificial intelligence research?". The Christian Science Monitor. Retrieved 19 December 2015.
  7. ^ a b c Cade Metz (27 April 2016). "Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free". Wired magazine. Retrieved 28 April 2016.
  8. ^ a b Dave Gershgorn (27 April 2016). "Elon Musk's Artificial Intelligence Group Opens A 'Gym' To Train A.I." Popular Science. Retrieved 29 April 2016.
  9. ^ Metz, Cade. "Elon Musk's Lab Wants to Teach Computers to Use Apps Just Like Humans Do". WIRED. Retrieved 31 December 2016.
  10. ^ Mannes, John. "OpenAI's Universe is the fun parent every artificial intelligence deserves". TechCrunch. Retrieved 31 December 2016.
  11. ^ "OpenAI - Universe". Retrieved 31 December 2016.
  12. ^ Claburn, Thomas. "Elon Musk-backed OpenAI reveals Universe – a universal training ground for computers". The Register. Retrieved 31 December 2016.
  13. ^ The Verge. 21 February 2018. https://www.theverge.com/2018/2/21/17036214/elon-musk-openai-ai-safety-leaves-board
  14. ^ Conger, Kate. "Elon Musk's Neuralink Sought to Open an Animal Testing Facility in San Francisco". Gizmodo. Retrieved 2018-10-11.
  15. ^ Kraft, Amy (14 December 2015). "Elon Musk invests in $1B effort to thwart the dangers of AI". CBS News. Retrieved 19 December 2015.
  16. ^ a b c d "Silicon Valley investors to bankroll artificial-intelligence center". The Seattle Times. 13 December 2015. Retrieved 19 December 2015.
  17. ^ a b Liedtke, Michael. "Elon Musk, Peter Thiel, Reid Hoffman, others back $1 billion OpenAI research center". San Jose Mercury News. Retrieved 19 December 2015.
  18. ^ Anthony Patch (10 March 2018). "Elon Musk Artificial Intelligence could be our biggest existential threat" (video). Retrieved 31 October 2018.
  19. ^ Mendoza, Jessica. "Tech leaders launch nonprofit to save the world from killer robots". The Christian Science Monitor.
  20. ^ Glenn W. Smith (10 April 2018). "Re: Sex-Bots—Let Us Look before We Leap". Arts. doi:10.3390/arts7020015. Retrieved 30 June 2018.
  21. ^ a b Metz, Cade (15 December 2015). "Elon Musk's Billion-Dollar AI Plan Is About Far More Than Saving the World". Wired. Retrieved 19 December 2015. “Altman said they expect this decades-long project to surpass human intelligence.”
  22. ^ Vishal Sikka (14 December 2015). "OpenAI: AI for All". InfyTalk. Infosys. Archived from the original on 22 December 2015. Retrieved 22 December 2015.
  23. ^ "Sam Altman's Manifest Destiny". The New Yorker (10 October 2016). Retrieved 4 October 2016.
  24. ^ Greg Brockman; John Schulman (27 April 2016). "OpenAI Gym Beta". OpenAI Blog. OpenAI. Retrieved 29 April 2016.
  25. ^ "OpenAI Gym". GitHub. Retrieved 8 May 2017.
  26. ^ Brockman, Greg (12 Sep 2017). "Yep, the Github repo has been the focus of the project for the past year. The Gym site looks cool but hasn't been maintained". @gdb. Retrieved 2017-11-07.
  27. ^ "AI Sumo Wrestlers Could Make Future Robots More Nimble". Wired. 11 October 2017. Retrieved 2 November 2017.
  28. ^ "OpenAI's Goofy Sumo-Wrestling Bots Are Smarter Than They Look". MIT Technology Review. Retrieved 2 November 2017.
  29. ^ Greene, Tristan (2018-05-04). "OpenAI's Debate Game teaches you and your friends how to lie like robots". The Next Web. Retrieved 2018-05-31.
  30. ^ "Why Scientists Think AI Systems Should Debate Each Other". Fast Company. 8 May 2018. Retrieved 2 June 2018.
  31. ^ Savov, Vlad. "My favorite game has been invaded by killer AI bots and Elon Musk hype". The Verge. Retrieved June 25, 2018.
  32. ^ Frank, Blair Hanley. "OpenAI's bot beats top Dota 2 player so badly that he quits". Venture Beat. Archived from the original on August 12, 2017. Retrieved August 12, 2017.
  33. ^ "Dota 2". blog.openai.com. Retrieved 12 August 2017.
  34. ^ "More on Dota 2". blog.openai.com. Retrieved 16 August 2017.
  35. ^ Simonite, Tom. "Can Bots Outwit Humans in One of the Biggest Esports Games?". Wired. Retrieved June 25, 2018.
  36. ^ Kahn, Jeremy. "A Bot Backed by Elon Musk Has Made an AI Breakthrough in Video Game World". Bloomberg. Retrieved June 27, 2018.
  37. ^ Clifford, Catherine. "Bill Gates says gamer bots from Elon Musk-backed nonprofit are 'huge milestone' in A.I." CNBC. Retrieved June 29, 2018.
  38. ^ "OpenAI Five Benchmark". blog.openai.com. Retrieved 25 August 2018.
  39. ^ Simonite, Tom. "Can Bots Outwit Humans in One of the Biggest Esports Games?". Wired. Retrieved 25 June 2018.
  40. ^ Vincent, James. "AI bots trained for 180 years a day to beat humans at Dota 2". The Verge. Retrieved 25 June 2018.
  41. ^ Savov, Vlad. "The OpenAI Dota 2 bots just defeated a team of former pros". The Verge. Retrieved August 7, 2018.
  42. ^ Simonite, Tom. "Pro Gamers Fend off Elon Musk-Backed AI Bots—for Now". Wired. Retrieved 25 August 2018.
  43. ^ Quach, Katyanna. "Game over, machines: Humans defeat OpenAI bots once again at video games Olympics". The Register. Retrieved 25 August 2018.
  44. ^ "The International 2018: Results". blog.openai.com. Retrieved 25 August 2018.
  45. ^ "Learning Dexterity". Openai.com. OpenAI. Retrieved 27 August 2018.
  46. ^ Ryan, Mae (2018). "How Robot Hands Are Evolving to Do What Ours Can". Retrieved 1 September 2018.

External links