Bayesian optimization

Bayesian optimization is a sequential design strategy for global optimization of black-box functions[1] that does not require derivatives.

History

The term is generally attributed to Jonas Mockus, who coined it in a series of publications on global optimization in the 1970s and 1980s.[2][3][4]

Strategy

Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place a prior over it. The prior captures beliefs about the behaviour of the function. After gathering function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines the next query point.
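
A minimal sketch of this loop, assuming a Gaussian process as the prior over functions (a common but not the only choice) and a hypothetical one-dimensional objective; the objective f, the search interval, and the acquisition rule below are illustrative, not prescribed by the method:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def f(x):
        # Hypothetical expensive black-box objective (illustration only).
        return -(x - 2.0) ** 2

    X = np.array([[0.0], [1.0], [3.0]])   # initial query points
    y = f(X).ravel()                      # their evaluations, treated as data

    for _ in range(10):
        # Condition the Gaussian-process prior on the data gathered so far;
        # the fitted model represents the posterior over the objective.
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        candidates = np.linspace(0.0, 4.0, 500).reshape(-1, 1)
        mu, sigma = gp.predict(candidates, return_std=True)
        # The posterior yields an acquisition function (here a simple upper
        # confidence bound); its maximizer is the next query point.
        acquisition = mu + 2.0 * sigma
        x_next = candidates[[np.argmax(acquisition)]]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next).ravel())

    print("best point found:", X[np.argmax(y), 0])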

Examples

Examples of acquisition functions include probability of improvement, expected improvement, Bayesian expected losses, upper confidence bounds (UCB), Thompson sampling and mixtures of these.[5] They all trade off exploration and exploitation so as to minimize the number of function queries. As such, Bayesian optimization is well suited for functions that are very expensive to evaluate.
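
As a sketch, three of these criteria can be computed from a surrogate's posterior mean mu and standard deviation sigma at candidate points, for a maximization problem; here f_best is the best value observed so far, and the margin xi and weight kappa are illustrative tuning parameters, not fixed by the definitions:

    import numpy as np
    from scipy.stats import norm

    def probability_of_improvement(mu, sigma, f_best, xi=0.01):
        # PI: posterior probability that a point improves on f_best by xi.
        return norm.cdf((mu - f_best - xi) / np.maximum(sigma, 1e-12))

    def expected_improvement(mu, sigma, f_best, xi=0.01):
        # EI: posterior expectation of the improvement max(0, f - f_best - xi).
        z = (mu - f_best - xi) / np.maximum(sigma, 1e-12)
        return (mu - f_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

    def upper_confidence_bound(mu, sigma, kappa=2.0):
        # UCB: optimistic estimate; kappa controls the amount of exploration.
        return mu + kappa * sigma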

Solution methods

The maximum of the acquisition function is typically found either by discretizing the input space or by means of an auxiliary optimizer. Since the acquisition function is cheap to evaluate relative to the objective, this inner optimization adds little overhead.
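
Both routes are sketched below for a hypothetical acquisition function on the interval [0, 4]: discretization takes the argmax over a dense grid, while the auxiliary-optimizer route hands the negated acquisition to a standard continuous optimizer (here SciPy's L-BFGS-B) from several random starts, since acquisition functions are often multimodal:

    import numpy as np
    from scipy.optimize import minimize

    def acquisition(x):
        # Hypothetical acquisition function (illustration only).
        x = np.asarray(x, dtype=float)
        return np.sin(3.0 * x) - (x - 2.0) ** 2

    # 1) Discretization: evaluate on a dense grid and take the argmax.
    grid = np.linspace(0.0, 4.0, 2000)
    x_grid = grid[np.argmax(acquisition(grid))]

    # 2) Auxiliary optimizer: multi-start local optimization of the negation.
    rng = np.random.default_rng(0)
    results = [minimize(lambda x: -acquisition(x[0]), x0=[s], bounds=[(0.0, 4.0)])
               for s in rng.uniform(0.0, 4.0, size=8)]
    x_opt = min(results, key=lambda r: r.fun).x[0]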

Applications

The approach has been applied to solve a wide range of problems,[6] including learning to rank,[7] interactive animation,[8] robotics,[9][10][11][12] sensor networks,[13][14] automatic algorithm configuration,[15] automatic machine learning toolboxes,[16][17][18] reinforcement learning, planning, visual attention, architecture configuration in deep learning, static program analysis, experimental particle physics,[19] etc.

References

  1. ^ Jonas Mockus (2013). Bayesian approach to global optimization: theory and applications. Kluwer Academic.
  2. ^ Jonas Mockus: On Bayesian Methods for Seeking the Extremum. Optimization Techniques 1974: 400–404
  3. ^ Jonas Mockus: On Bayesian Methods for Seeking the Extremum and their Application. IFIP Congress 1977: 195–200
  4. ^ J. Mockus, Bayesian Approach to Global Optimization. Kluwer Academic Publishers, Dordrecht, 1989
  5. ^ Matthew W. Hoffman, Eric Brochu, Nando de Freitas: Portfolio Allocation for Bayesian Optimization. Uncertainty in Artificial Intelligence (UAI): 327–336 (2011)
  6. ^ Eric Brochu, Vlad M. Cora, Nando de Freitas: A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning. CoRR abs/1012.2599 (2010)
  7. ^ Eric Brochu, Nando de Freitas, Abhijeet Ghosh: Active Preference Learning with Discrete Choice Data. NIPS 2007
  8. ^ Eric Brochu, Tyson Brochu, Nando de Freitas: A Bayesian Interactive Optimization Approach to Procedural Animation Design. Symposium on Computer Animation 2010: 103–112
  9. ^ Daniel J. Lizotte, Tao Wang, Michael H. Bowling, Dale Schuurmans: Automatic Gait Optimization with Gaussian Process Regression. IJCAI 2007: 944–949
  10. ^ Ruben Martinez-Cantin, Nando de Freitas, Eric Brochu, Jose Castellanos and Arnaud Doucet. A Bayesian exploration-exploitation approach for optimal online sensing and planning with a visually guided mobile robot. Autonomous Robots. Volume 27, Issue 2, pp 93–103 (2009)
  11. ^ Scott Kuindersma, Roderic Grupen, and Andrew Barto. Variable Risk Control via Stochastic Optimization. International Journal of Robotics Research, volume 32, number 7, pp 806–825 (2013)
  12. ^ Roberto Calandra, André Seyfarth, Jan Peters, and Marc P. Deisenroth: Bayesian optimization for learning gaits under uncertainty. Ann. Math. Artif. Intell. Volume 76, Issue 1, pp 5–23 (2016) DOI: 10.1007/s10472-015-9463-9
  13. ^ Niranjan Srinivas, Andreas Krause, Sham M. Kakade, Matthias W. Seeger: Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting. IEEE Transactions on Information Theory 58(5):3250–3265 (2012)
  14. ^ Roman Garnett, Michael A. Osborne, Stephen J. Roberts: Bayesian optimization for sensor set selection. IPSN 2010: 209–219
  15. ^ Frank Hutter, Holger Hoos, and Kevin Leyton-Brown (2011). Sequential model-based optimization for general algorithm configuration, Learning and Intelligent Optimization
  16. ^ J. Bergstra, D. Yamins, D. D. Cox (2013). Hyperopt: A Python Library for Optimizing the Hyperparameters of Machine Learning Algorithms. Proc. SciPy 2013.
  17. ^ Chris Thornton, Frank Hutter, Holger H. Hoos, Kevin Leyton-Brown: Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms. KDD 2013: 847–855
  18. ^ Jasper Snoek, Hugo Larochelle and Ryan Prescott Adams. Practical Bayesian Optimization of Machine Learning Algorithms. Neural Information Processing Systems, 2012
  19. ^ Philip Ilten, Mike Williams, Yunjie Yang. Event generator tuning using Bayesian optimization. 2017 JINST 12 P04028. DOI: 10.1088/1748-0221/12/04/P04028

External links

  • BayesOpt, the NIPS workshop on Bayesian optimization.
  • GPyOpt, Python open-source library for Bayesian Optimization based on GPy.
  • Bayesopt, an efficient implementation in C/C++ with support for Python, Matlab and Octave.
  • Spearmint, a Python implementation focused on parallel and cluster computing.
  • Hyperopt, a Python implementation for hyperparameter optimization.
  • SMAC, a Java implementation of random-forest-based Bayesian optimization for general algorithm configuration.
  • pybo, a Python implementation of modular Bayesian optimization.
  • Bayesopt.m, a Matlab implementation of Bayesian optimization with or without constraints.
  • MOE, a Python/C++/CUDA implementation of Bayesian global optimization using Gaussian processes.
  • SigOpt, Bayesian global optimization offered as a SaaS service focused on enterprise use cases.
  • Metaopt, a Python implementation of hyperparameter optimization focused on parallel and cluster computing.
  • Mind Foundry OPTaaS, Bayesian global optimization offered via web services with flexible parameter constraints.