Integral linearity

A measurement system consists of a sensor that responds to the physical parameter of interest and an output that presents the result in a medium suitable for reading by whatever system needs the value of the parameter. (This could be a device that converts the temperature of the surrounding air or water into the visually readable height of a column of mercury in a small tube, for example; but the conversion could also be to an electronic encoding of the parameter, for reading by a computer system.)

The integral linearity is then a measure of the fidelity of the conversion performed by the measuring system: it quantifies the device's deviation from ideal linear behaviour over its operating range, expressed as a percentage of the full-scale measurement.
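For illustration, such a figure can be computed from a recorded transfer function by fitting a straight line and taking the worst-case deviation as a fraction of the full-scale span. The following Python sketch is a hypothetical example, not drawn from the cited sources; the function name `integral_linearity_pct`, the use of a least-squares fit as the reference line, and the sample data are all assumptions.

```python
import numpy as np

def integral_linearity_pct(inputs, outputs):
    """Worst-case deviation of `outputs` from a straight-line fit,
    as a percentage of the full-scale output span.

    Hypothetical helper for illustration only; a least-squares fit
    stands in for the reference straight line.
    """
    inputs = np.asarray(inputs, dtype=float)
    outputs = np.asarray(outputs, dtype=float)

    # Straight-line fit: outputs ~ slope * inputs + offset
    slope, offset = np.polyfit(inputs, outputs, 1)
    ideal = slope * inputs + offset

    full_scale = outputs.max() - outputs.min()
    max_deviation = np.abs(outputs - ideal).max()
    return 100.0 * max_deviation / full_scale

# Example: a response that bows slightly away from a straight line
x = np.linspace(0.0, 1.0, 11)
y = 10.0 * x + 0.05 * np.sin(np.pi * x)  # mild nonlinearity
print(f"integral linearity: {integral_linearity_pct(x, y):.2f}% of full scale")
```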

The most commonly specified form of integral linearity is independent linearity.

In the context of a digital-to-analog converter (DAC) or an analog-to-digital converter (ADC), independent linearity uses a reference line fitted to minimize the deviation from the actual transfer function, with no constraints on the fit. Other types of integral linearity place constraints on the symmetry or end points of the linear fit with respect to the actual data.[1][2]
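To make the distinction concrete, the sketch below compares an unconstrained fit with an end-point fit whose line is forced through the first and last points of a hypothetical ADC transfer function. All names and data are illustrative assumptions, not taken from the cited notes, and a least-squares fit again stands in for the unconstrained reference line.

```python
import numpy as np

# Hypothetical 3-bit ADC transfer function (code centres in volts),
# with a small bow added to an otherwise ideal straight line.
codes = np.arange(8, dtype=float)
measured = codes + 0.08 * np.sin(np.pi * codes / 7)

# Independent linearity: reference line fitted with no constraints.
slope, offset = np.polyfit(codes, measured, 1)
independent_line = slope * codes + offset

# End-point linearity: reference line constrained through the end points.
ep_slope = (measured[-1] - measured[0]) / (codes[-1] - codes[0])
endpoint_line = measured[0] + ep_slope * (codes - codes[0])

full_scale = measured[-1] - measured[0]
for name, line in [("independent", independent_line),
                   ("end-point", endpoint_line)]:
    worst = np.abs(measured - line).max()
    print(f"{name:11s} linearity error: {100 * worst / full_scale:.2f}% FS")
```

Because the end-point line is pinned at both extremes, it generally reports a larger worst-case deviation than the unconstrained fit for the same data.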

Notes

  1. ^ Kolts, Bertram S. (2005). "Understanding Linearity and Monotonicity" (PDF). analogZONE. Archived from the original (PDF) on February 4, 2012. Retrieved September 24, 2014.
  2. ^ Kolts, Bertram S. (2005). "Understanding Linearity and Monotonicity". Foreign Electronic Measurement Technology. 24 (5): 30–31. Retrieved September 25, 2014.