Loss function

In mathematical optimization, statistics, econometrics, decision theory, machine learning and computational neuroscience, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its negative (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized.

In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century.[1] In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s.[2] In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.

In classical statistics (both frequentist and Bayesian), a loss function is typically treated as something of a background mathematical convention. Critics such as W. Edwards Deming and Nassim Nicholas Taleb have argued that loss functions require much greater attention than they have traditionally been given and that loss functions used in real world decision making need to reflect actual empirical experience. They argue that real-world loss functions are often very different from the smooth, symmetric ones used by classical convention, and are often highly asymmetric, nonlinear, and discontinuous.

Examples[edit]

Regret[edit]

Leonard J. Savage argued that, when using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the underlying circumstances been known and the decision that was in fact taken before they were known.

Quadratic loss function[edit]

The use of a quadratic loss function is common, for example when using least squares techniques. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is

    \lambda(x) = C(t - x)^2

for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1.
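
A minimal Python sketch (the function name and numerical values are ours, purely illustrative) makes the point about the constant concrete: rescaling the loss by any positive C leaves the minimizing decision unchanged.

    # Quadratic loss lambda(x) = C * (t - x)**2 for a target value t.
    def quadratic_loss(x, t, c=1.0):
        return c * (t - x) ** 2

    target = 2.0
    candidates = [1.8, 2.0, 2.3, 2.6]
    for c in (1.0, 5.0):
        # The minimizing candidate is the same for every c > 0.
        best = min(candidates, key=lambda x: quadratic_loss(x, target, c))
        print(c, best)  # best is 2.0 in both cases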

Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function.

The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used.
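
The following numpy sketch (with illustrative matrices of our own choosing) shows the tractability claim in miniature: because the loss is a quadratic form, the first-order conditions are linear, so a constrained optimum is found by solving a single linear system.

    import numpy as np

    # Quadratic loss in the deviations of x from desired values t, minimized
    # subject to a linear constraint that makes hitting every target impossible.
    Q = np.array([[2.0, 0.0],
                  [0.0, 1.0]])   # symmetric positive-definite weight matrix
    t = np.array([1.0, -2.0])    # desired values of the target variables
    A = np.array([[1.0, 1.0]])   # constraint x1 + x2 = 0; note t violates it
    b = np.array([0.0])

    # Lagrangian first-order conditions are *linear*:
    #   2 Q (x - t) + A.T @ mu = 0   and   A @ x = b
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[2 * Q, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([2 * Q @ t, b])
    x = np.linalg.solve(K, rhs)[:n]
    print(x)   # [ 1.333..., -1.333...]: a compromise between the two targets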

0-1 loss function[edit]

In statistics and decision theory, a frequently used loss function is the 0-1 loss function

    L(\hat{y}, y) = I(\hat{y} \ne y),

where I is the indicator function.
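
A short Python rendering (illustrative, with generic labels) shows that averaging the 0-1 loss over a sample gives the misclassification rate:

    # 0-1 loss: 1 for an incorrect classification, 0 for a correct one.
    def zero_one_loss(y_pred, y_true):
        return int(y_pred != y_true)   # the indicator function I(y_pred != y_true)

    y_true = [0, 1, 1, 0, 1]
    y_pred = [0, 1, 0, 0, 0]
    # The average 0-1 loss is simply the misclassification rate.
    print(sum(map(zero_one_loss, y_pred, y_true)) / len(y_true))  # 0.4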

Expected loss[edit]

In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X.

Statistics[edit]

Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms.

Frequentist expected loss[edit]

We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, P_θ, of the observed data, X. This is also referred to as the risk function[3][4][5][6] of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by:

    R(\theta, \delta) = \operatorname{E}_\theta\big[L(\theta, \delta(X))\big] = \int_{\mathcal{X}} L\big(\theta, \delta(x)\big) \, \mathrm{d}P_\theta(x).

Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, \operatorname{E}_\theta is the expectation over all population values of X, dP_θ is a probability measure over the event space of X (parametrized by θ), and the integral is evaluated over the entire support of X.
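
As an illustrative sketch (the sampling model, decision rule, and loss are our own choices, not prescribed by the theory), the risk of a decision rule can be approximated by Monte Carlo: draw X from P_θ repeatedly and average the resulting losses.

    import random

    def risk(theta, delta, loss, n_obs=10, n_sims=100_000):
        """Monte Carlo approximation of R(theta, delta) = E_theta[L(theta, delta(X))]."""
        total = 0.0
        for _ in range(n_sims):
            x = [random.gauss(theta, 1.0) for _ in range(n_obs)]  # X ~ P_theta (assumed normal)
            total += loss(theta, delta(x))
        return total / n_sims

    sample_mean = lambda x: sum(x) / len(x)            # decision rule delta(X)
    squared_error = lambda theta, d: (theta - d) ** 2  # loss L(theta, delta(X))

    # For N(theta, 1) data, the risk of the sample mean is 1/n = 0.1.
    print(risk(theta=3.0, delta=sample_mean, loss=squared_error))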

Bayesian expected loss[edit]

In a Bayesian approach, the expectation is calculated using the posterior distribution π* of the parameter θ:

    \rho(\pi^*, a) = \int_\Theta L(\theta, a) \, \mathrm{d}\pi^*(\theta).

One then should choose the action a* which minimises this expected loss. Although this will result in choosing the same action as would be chosen using the frequentist risk, the emphasis of the Bayesian approach is that one is only interested in choosing the optimal action under the actual observed data, whereas choosing the actual frequentist optimal decision rule, which is a function of all possible observations, is a much more difficult problem.
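
A minimal sketch with a made-up discrete posterior and loss table shows the computation: the expected loss of each action is a posterior-weighted sum, and a* is the action with the smallest one.

    # Posterior pi* over two states of nature, and losses for two actions.
    posterior = {"theta1": 0.7, "theta2": 0.3}          # pi*(theta | data)
    loss = {("a1", "theta1"): 0.0, ("a1", "theta2"): 10.0,
            ("a2", "theta1"): 2.0, ("a2", "theta2"): 2.0}

    def expected_loss(a):
        # rho(pi*, a) = sum over theta of L(theta, a) * pi*(theta)
        return sum(loss[(a, th)] * p for th, p in posterior.items())

    # Choose the action a* minimising the posterior expected loss.
    a_star = min(["a1", "a2"], key=expected_loss)
    print(a_star, expected_loss("a1"), expected_loss("a2"))  # a2, 3.0, 2.0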

Examples in statistics[edit]

  • For a scalar parameter θ, a decision function δ(x) whose output \hat\theta is an estimate of θ, and a quadratic loss function L(\theta, \hat\theta) = (\theta - \hat\theta)^2, the risk function becomes the mean squared error of the estimate, R(\theta, \hat\theta) = \operatorname{E}_\theta\big[(\theta - \hat\theta)^2\big] (see the numerical sketch after this list).
  • In density estimation, the unknown parameter is the probability density f itself, and with the integrated quadratic loss function L(f, \hat f) = \|f - \hat f\|_2^2 the risk function becomes the mean integrated squared error, R(f, \hat f) = \operatorname{E}\big[\|f - \hat f\|_2^2\big].
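
A hedged numerical check of the first bullet (the sampling model and the competing estimators are chosen for illustration): under quadratic loss the risk is the mean squared error, so estimators can be compared by simulating it. Here the sample mean of n = 10 normal observations attains risk σ²/n ≈ 0.1, while using only the first observation attains risk ≈ 1.

    import random

    def mse(estimator, theta, n_obs=10, n_sims=50_000):
        # Risk under quadratic loss = mean squared error E_theta[(theta - est)^2].
        total = 0.0
        for _ in range(n_sims):
            x = [random.gauss(theta, 1.0) for _ in range(n_obs)]
            total += (theta - estimator(x)) ** 2
        return total / n_sims

    print(mse(lambda x: sum(x) / len(x), theta=0.0))  # ~0.1, i.e. variance/n
    print(mse(lambda x: x[0], theta=0.0))             # ~1.0, a single observation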

Economic choice under uncertainty[edit]

In economics, decision-making under uncertainty is often modelled using the von Neumann-Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth. Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized.

Decision rules[edit]

A decision rule makes a choice using an optimality criterion. Some commonly used criteria are:

  • Minimax: Choose the decision rule with the lowest worst loss, that is, minimize the worst-case (maximum possible) loss:

    \delta^* = \arg\min_\delta \, \max_{\theta \in \Theta} \, R(\theta, \delta).

  • Invariance: Choose the optimal decision rule which satisfies an invariance requirement.
  • Choose the decision rule with the lowest average loss (i.e. minimize the expected value of the loss function); the minimax and average-loss criteria are compared in the sketch following this list:

    \delta^* = \arg\min_\delta \operatorname{E}_\theta\big[R(\theta, \delta)\big] = \arg\min_\delta \int_\Theta R(\theta, \delta) \, p(\theta) \, \mathrm{d}\theta.
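
A toy comparison (the risk numbers and the prior are invented for illustration) shows how the two criteria can disagree: the minimax rule guards against the worst case, while the average-loss rule minimizes prior-weighted risk.

    # Risk R(theta, delta) of three decision rules under two states of nature.
    risks = {
        "delta1": {"theta1": 1.0, "theta2": 8.0},
        "delta2": {"theta1": 5.0, "theta2": 5.0},
        "delta3": {"theta1": 3.0, "theta2": 7.0},
    }
    prior = {"theta1": 0.5, "theta2": 0.5}

    # Minimax: minimize the worst-case loss max_theta R(theta, delta).
    minimax = min(risks, key=lambda d: max(risks[d].values()))

    # Average loss: minimize the prior-weighted expected risk.
    average = min(risks, key=lambda d: sum(risks[d][th] * p for th, p in prior.items()))

    print(minimax, average)  # delta2 (worst case 5.0), delta1 (average 4.5)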

Selecting a loss function[edit]

Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances.[7]

A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances.
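
This can be checked numerically; in the following sketch (data and search grid chosen for illustration) a grid search recovers the mean as the minimizer of total squared error and the median as the minimizer of total absolute error:

    # Which location estimate minimizes each total loss over the data?
    data = [1.0, 2.0, 2.0, 3.0, 10.0]          # note the outlier at 10
    mean = sum(data) / len(data)                # 3.6
    median = sorted(data)[len(data) // 2]       # 2.0

    def total_loss(c, loss):
        return sum(loss(x - c) for x in data)

    grid = [i / 100 for i in range(1101)]       # candidate locations 0.00 .. 11.00
    best_sq = min(grid, key=lambda c: total_loss(c, lambda e: e ** 2))
    best_abs = min(grid, key=lambda c: total_loss(c, abs))
    print(best_sq, mean)     # 3.6: the mean minimizes squared-error loss
    print(best_abs, median)  # 2.0: the median minimizes absolute loss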

In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth.

But for risk-averse (or risk-loving) agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility.

Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering.

For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable.

Two very commonly used loss functions are the squared loss, L(a) = a^2, and the absolute loss, L(a) = |a|. However the absolute loss has the disadvantage that it is not differentiable at a = 0. The squared loss has the disadvantage that it has the tendency to be dominated by outliers: when summing over a set of a_i (as in \sum_{i=1}^n L(a_i)), the final sum tends to be the result of a few particularly large a-values, rather than an expression of the average a-value.
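
A small illustration of the outlier effect (the residual values are invented for the example):

    # Squared vs. absolute loss on residuals containing one large outlier.
    residuals = [0.5, -0.3, 0.2, -0.4, 20.0]

    sq = [a ** 2 for a in residuals]
    ab = [abs(a) for a in residuals]

    # The single outlier contributes almost all of the squared-loss sum...
    print(sq[-1] / sum(sq))   # ~0.999
    # ...but a far smaller share of the absolute-loss sum.
    print(ab[-1] / sum(ab))   # ~0.93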

The choice of a loss function is not arbitrary. It is very restrictive and sometimes the loss function may be characterized by its desirable properties.[8] Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others.

W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often aren't mathematically nice and aren't differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after cannot, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than the classical smooth, continuous, symmetric, differentiable cases.[citation needed]
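
A sketch of such a discontinuous, asymmetric loss (the cost figures are purely illustrative):

    # Loss of arriving t minutes after gate closure (negative t = early).
    def gate_loss(t_minutes):
        if t_minutes <= 0:
            return -0.1 * t_minutes   # mild cost of waiting at the gate
        return 500.0                  # missed flight: rebooking, hotel, delay

    for t in (-30, -1, 0, 1, 30):
        print(t, gate_loss(t))
    # Arriving 1 minute late costs vastly more than arriving 30 minutes early.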

Taleb especially has argued that the practice of selecting loss functions based on mathematical niceness rather than actual loss experience is not really different from selecting data based on niceness rather than empirical observation; in other words, he has argued that it should be considered a kind of scientific fraud.[citation needed]

References[edit]

  1. ^ Wald, A. (1950). Statistical Decision Functions. Wiley.
  2. ^ Cramér, H. (1930). On the mathematical theory of risk. Centraltryckeriet.
  3. ^ Nikulin, M.S. (2001) [1994], "Risk of a statistical procedure", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
  4. ^ Berger, James O. (1985). Statistical decision theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. ISBN 0-387-96098-8. MR 0804611.
  5. ^ DeGroot, Morris (2004) [1970]. Optimal Statistical Decisions. Wiley Classics Library. ISBN 0-471-68029-X. MR 2288194.
  6. ^ Robert, Christian P. (2007). The Bayesian Choice (2nd ed.). New York: Springer. doi:10.1007/0-387-71599-1. ISBN 0-387-95231-4. MR 1835885.
  7. ^ Pfanzagl, J. (1994). Parametric Statistical Theory. Berlin: Walter de Gruyter. ISBN 3-11-013863-8.
  8. ^ Detailed information on mathematical principles of the loss function choice is given in Chapter 2 of the book Klebanov, B.; Rachev, Svetlozar T.; Fabozzi, Frank J. (2009). Robust and Non-Robust Models in Statistics. New York: Nova Science Publishers, Inc. (and references there).

Further reading[edit]

  • Cecchetti, Stephen G, 2000. "Making Monetary Policy: Objectives and Rules," Oxford Review of Economic Policy, Oxford University Press, vol. 16(4), pages 43-59, Winter.
  • Horowitz, Ann R., 1987. "Loss functions and public policy," Journal of Macroeconomics, Elsevier, vol. 9(4), pages 489-504.
  • Waud, Roger N, 1976. "Asymmetric Policymaker Utility Functions and Optimal Policy Under Uncertainty," Econometrica, Econometric Society, vol. 44(1), pages 53-66, January.