Generative adversarial network
Generative adversarial networks (GANs) are a class of artificial intelligence algorithms used in unsupervised machine learning, implemented by a system of two neural networks contesting with each other in a zero-sum game framework. They were introduced by Ian Goodfellow et al. in 2014.[1] This technique can generate photographs that look at least superficially authentic to human observers, having many realistic characteristics (although in tests people can often distinguish generated images from real ones).[2]
Method
One network generates candidates (generative) and the other evaluates them (discriminative).[3][4][5][6] Typically, the generative network learns to map from a latent space to a particular data distribution of interest, while the discriminative network discriminates between instances from the true data distribution and candidates produced by the generator. The generative network's training objective is to increase the error rate of the discriminative network (i.e., "fool" the discriminator network by producing novel synthesised instances that appear to have come from the true data distribution).[3][7]
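In the original formulation,[3] this corresponds to a two-player minimax game with value function

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))],$$

where $G$ is the generator, $D$ is the discriminator, $p_z$ is the prior over the latent space, and $p_{\text{data}}$ is the true data distribution.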
In practice, a known dataset serves as the initial training data for the discriminator. Training the discriminator involves presenting it with samples from the dataset until it reaches some level of accuracy. The generator is typically seeded with a randomized input sampled from a predefined latent space[4] (e.g. a multivariate normal distribution). Thereafter, samples synthesized by the generator are evaluated by the discriminator. Backpropagation is applied in both networks[5] so that the generator produces better images, while the discriminator becomes more skilled at flagging synthetic images.[8] The generator is typically a deconvolutional neural network, and the discriminator is a convolutional neural network.
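A minimal sketch of this adversarial training loop is shown below. It assumes simple fully connected networks, an illustrative latent dimension and learning rate, and data scaled to [-1, 1]; these choices are assumptions for illustration, not part of the original formulation. It follows the common practice of training the generator to maximize log D(G(z)) (the "non-saturating" heuristic recommended in the original paper[1]) rather than directly minimizing log(1 − D(G(z))).

```python
# Minimal GAN training-loop sketch (PyTorch). Architectures, latent_dim,
# data_dim and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumption)

# Generator: maps a latent vector z to a synthetic sample in [-1, 1].
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())

# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator step: learn to separate real samples from generated ones.
    z = torch.randn(batch_size, latent_dim)   # sample from the latent prior
    fake_batch = G(z).detach()                # block gradients into G here
    loss_D = bce(D(real_batch), real_labels) + bce(D(fake_batch), fake_labels)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator step: try to make D classify generated samples as real.
    z = torch.randn(batch_size, latent_dim)
    loss_G = bce(D(G(z)), real_labels)        # "fool" the discriminator
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```

In a full training run this step would be repeated over mini-batches of the dataset, alternating discriminator and generator updates as above.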
The idea of inferring models in a competitive setting (model versus discriminator) was proposed by Li, Gauci and Gross in 2013.[9] Their method was used for behavioral inference. It is termed Turing Learning,[10] as the setting is akin to that of a Turing test. Turing Learning is a generalization of GANs:[11] models other than neural networks can be considered, and the discriminators are allowed to influence the processes from which the datasets are obtained, making them active interrogators as in the Turing test. The idea of adversarial training can also be found in earlier work, such as that of Schmidhuber in 1992.[12]
Application
GANs have been used to produce samples of photorealistic images for the purposes of visualizing new interior or industrial designs, shoes, bags, clothing items, or items for computer game scenes.[citation needed] These networks were reported to be used by Facebook.[13] Recently, GANs have modeled patterns of motion in video.[14] They have also been used to reconstruct 3D models of objects from images[15] and to improve astronomical images.[16] In 2017 a fully convolutional feedforward GAN was used for image enhancement using automated texture synthesis in combination with perceptual loss. The system focused on realistic textures rather than pixel accuracy, yielding higher image quality at high magnification.[17]
GANs were used to create the 2018 painting Edmond de Belamy which sold for $432,500.[18]
Furthermore, GANs were used in the following applications:
- Automatic face aging[19]
- Video gaming applications:
Beginning in late December 2018, GAN methods gained considerable popularity in the PC video game modding community as a way of up-scaling low-resolution 2D textures in older games. The textures are recreated as custom assets at very high resolutions (4K or higher) via AI image training and then down-sampled to the game's native resolution, giving a result similar to supersampling anti-aliasing. With proper training, GAN-based up-scaling can produce 2D texture assets that are clearer and sharper than the game's original developer-intended content while preserving the original's level of detail and color. Known examples of extensive GAN usage include (but are not limited to) Final Fantasy VIII, Final Fantasy IX, Resident Evil REmake HD Remaster, and Max Payne. Because GAN methods are currently used mainly for 2D texture modding, they apply only to PC-exclusive releases or to games ported to PC from consoles, as console games usually do not allow game files to be modified or new content to be added without prior hacking.
References
- ^ Goodfellow, Ian; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). "Generative Adversarial Networks". arXiv:1406.2661 [cs.LG].
- ^ Salimans, Tim; Goodfellow, Ian; Zaremba, Wojciech; Cheung, Vicki; Radford, Alec; Chen, Xi (2016). "Improved Techniques for Training GANs". arXiv:1606.03498 [cs.LG].
- ^ a b Goodfellow, Ian J.; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua (2014). "Generative Adversarial Networks". arXiv:1406.2661 [stat.ML].
- ^ a b Thaler, SL, US Patent 05659666, Device for the autonomous generation of useful information, 08/19/1997.
- ^ a b Thaler, SL, US Patent 07454388, Device for the autonomous bootstrapping of useful information, 11/18/2008.
- ^ Thaler, SL, The Creativity Machine Paradigm, Encyclopedia of Creativity, Invention, Innovation, and Entrepreneurship, (ed.) E.G. Carayannis, Springer Science+Business Media, LLC, 2013.
- ^ Luc, Pauline; Couprie, Camille; Chintala, Soumith; Verbeek, Jakob (2016-11-25). "Semantic Segmentation using Adversarial Networks". NIPS Workshop on Adversarial Training, December 2016, Barcelona, Spain. arXiv:1611.08408. Bibcode:2016arXiv161108408L.
- ^ Andrej Karpathy, Pieter Abbeel, Greg Brockman, Peter Chen, Vicki Cheung, Rocky Duan, Ian Goodfellow, Durk Kingma, Jonathan Ho, Rein Houthooft, Tim Salimans, John Schulman, Ilya Sutskever, and Wojciech Zaremba, Generative Models, OpenAI, retrieved April 7, 2016.
- ^ Li, Wei; Gauci, Melvin; Gross, Roderich (July 6, 2013). "A Coevolutionary Approach to Learn Animal Behavior Through Controlled Interaction". Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO 2013). Amsterdam, The Netherlands: ACM. pp. 223–230.
- ^ Li, Wei; Gauci, Melvin; Groß, Roderich (30 August 2016). "Turing learning: a metric-free approach to inferring behavior and its application to swarms". Swarm Intelligence. 10 (3): 211–243. doi:10.1007/s11721-016-0126-1.
- ^ Gross, Roderich; Gu, Yue; Li, Wei; Gauci, Melvin (December 6, 2017). "Generalizing GANs: A Turing Perspective". Proceedings of the Thirty-first Annual Conference on Neural Information Processing Systems (NIPS 2017). Long Beach, CA, USA. pp. 1–11.
- ^ Schmidhuber, Jürgen (November 1992). "Learning Factorial Codes by Predictability Minimization". Neural Computation. 4 (6): 863–879. doi:10.1162/neco.1992.4.6.863.
- ^ Greenemeier, Larry (June 20, 2016). "When Will Computers Have Common Sense? Ask Facebook". Scientific American. Retrieved July 31, 2016.
- ^ Vondrick, Carl; Pirsiavash, Hamed; Torralba, Antonio (2016). "Generating Videos with Scene Dynamics". carlvondrick.com. Bibcode:2016arXiv160902612V.
- ^ "3D Generative Adversarial Network". 3dgan.csail.mit.edu.
- ^ Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan (2017-02-01). "Generative Adversarial Networks recover features in astrophysical images of galaxies beyond the deconvolution limit". Monthly Notices of the Royal Astronomical Society: Letters. 467 (1): L110–L114. arXiv:1702.00403. Bibcode:2017MNRAS.467L.110S. doi:10.1093/mnrasl/slx008.
- ^ Sajjadi, Mehdi S. M.; Schölkopf, Bernhard; Hirsch, Michael (2016-12-23). "EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis". arXiv:1612.07919 [cs.CV].
- ^ Cohn, Gabe (2018-10-25). "AI Art at Christie's Sells for $432,500". The New York Times.
- ^ Antipov, Grigory; Baccouche, Moez; Dugelay, Jean-Luc. "Face Aging With Conditional Generative Adversarial Networks". arXiv:1702.01983.
External links
- Knight, Will. "What to expect of artificial intelligence in 2017". MIT Technology Review. Retrieved 2017-01-05.