The term "accuracy" is routinely used to describe a wide array of performance specifications for measuring instruments. The designer, manufacturer, and user of a particular instrument may use the same word but mean entirely different things, with correspondingly different expectations.
If someone indicates that they want an ozone monitor with a range up to 100 parts per million (ppm) and an accuracy of 1.0 ppm, what do they really want?
Does the 1.0 ppm refer to a total deviation of 1.0 ppm (±0.5 ppm), or does the user want to reach the target within 1.0 ppm (±1.0 ppm, a total band of 2.0 ppm)?
Do they require consistency, i.e., repeated readings within ±1.0 ppm?
Is the user looking for resolution of 1.0 ppm?
How linear must the output be?
Are output variations with changes in temperature a concern?
What effect will EMI and RFI have on the signal?
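The questions above can be made concrete with a little arithmetic. The sketch below (our own illustration, not from any vendor specification) shows three common readings of the same "1.0 ppm accuracy" claim on a 100 ppm full-scale monitor:

```python
# Hypothetical illustration: three interpretations of "accuracy of 1.0 ppm"
# on a 100 ppm full-scale ozone monitor.
FULL_SCALE_PPM = 100.0
spec_ppm = 1.0

# 1. Total deviation of 1.0 ppm: the error band is +/- 0.5 ppm.
total_band = (-spec_ppm / 2, +spec_ppm / 2)

# 2. Within 1.0 ppm of target: +/- 1.0 ppm, a 2.0 ppm total band.
within_band = (-spec_ppm, +spec_ppm)

# 3. The same figure expressed as a percentage of full scale.
percent_fs = spec_ppm / FULL_SCALE_PPM * 100

print(total_band)   # (-0.5, 0.5)
print(within_band)  # (-1.0, 1.0)
print(percent_fs)   # 1.0
```

Three answers, one spec sheet number: exactly the ambiguity the article warns about.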
Accuracy is often used as a catch-all for many of the following terms:
Resolution - the smallest distinguishable, discrete unit of measurement. If the resolution of a sensing system is coarser than the separation between the readings of interest, then "accuracy" has no relevant meaning. If, on the other hand, the resolution is too fine, the user may be paying for something they do not need, and may pay a further price in response time or stability. Resolution is generally specified as a percentage of Full Scale (F.S.), usually with the qualifier "less than or equal to" (≤).
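As a minimal sketch of what a discrete resolution means in practice (the function name and figures are our own, not from any datasheet), readings simply snap to the nearest resolution step:

```python
# Illustrative only: resolution as the smallest step a reading can change by.
def quantize(reading_ppm: float, resolution_ppm: float) -> float:
    """Snap a raw reading to the instrument's resolution grid."""
    return round(reading_ppm / resolution_ppm) * resolution_ppm

# With 1.0 ppm resolution on a 100 ppm F.S. monitor (1% F.S.),
# 42.4 ppm and 42.6 ppm are indistinguishable from 42 and 43.
print(quantize(42.4, 1.0))  # 42.0
print(quantize(42.6, 1.0))  # 43.0
```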
Repeatability - the figure describing an instrument's ability to achieve the same result in repeated tests approached from the same direction. Specifications state the tolerance within which the device will give the same output signal in repetitive cycles under identical conditions.
Without this information, resolution loses its practical meaning. What would be the purpose of excellent resolution if the tolerance for repeating the output signal were, for example, greater than the resolution? Repeatability is generally specified as a percentage of Full Scale (F.S.), with ± understood.
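One common way to reduce repeated same-direction readings to a ± % F.S. figure is to take half the spread of the readings; the sketch below assumes that convention (the data is invented for illustration):

```python
def repeatability_pct_fs(readings, full_scale):
    """Half the spread of repeated same-direction readings, as +/- % F.S."""
    half_spread = (max(readings) - min(readings)) / 2
    return half_spread / full_scale * 100

# Ten repeated approaches to the same 50 ppm point (illustrative data).
runs = [50.2, 49.9, 50.1, 50.0, 49.8, 50.2, 50.0, 49.9, 50.1, 50.0]
print(round(repeatability_pct_fs(runs, 100.0), 3))  # 0.2, i.e. +/- 0.2 % F.S.
```

Note that if this figure exceeded the 1% F.S. resolution of the earlier example, the fine resolution would be wasted, as the text observes.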
Non-Linearity - the deviation of the output from a straight line versus a linear input. With most gas sensing devices the output is advertised as "linear" or "linearized." With electrochemical sensors, output vs. concentration is very close to linear. With solid state and catalytic sensors, outputs are far from linear, but may be linearized (the output is modified to compensate for the response curve of the sensor). Non-linearity is generally specified as a percentage of F.S.
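A common convention (assumed here, as manufacturers differ) is to quote non-linearity as the worst-case deviation from the endpoint-to-endpoint straight line, as % F.S. The calibration points below are invented:

```python
# Assumed convention: non-linearity = worst deviation of measured output
# from the straight line through the two endpoints, expressed as % F.S.
def nonlinearity_pct_fs(inputs, outputs, full_scale):
    """Max deviation from the endpoint-to-endpoint straight line, % F.S."""
    x0, x1 = inputs[0], inputs[-1]
    y0, y1 = outputs[0], outputs[-1]
    slope = (y1 - y0) / (x1 - x0)
    worst = max(abs(y - (y0 + slope * (x - x0)))
                for x, y in zip(inputs, outputs))
    return worst / full_scale * 100

# Calibration points for a hypothetical sensor (ppm applied, ppm indicated).
conc = [0.0, 25.0, 50.0, 75.0, 100.0]
out  = [0.0, 26.0, 52.0, 76.0, 100.0]
print(nonlinearity_pct_fs(conc, out, 100.0))  # 2.0 (% F.S.)
```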
Temperature Drift - the variation in output readings as a function of temperature changes. Temperature drift is one of the simpler "accuracy" parameters, except that there is no uniformity in the way it is specified by manufacturers of sensors and transducers. Typically, it is specified as ± XX % of full scale (or ppm) per degree F or degree C. This figure can have a great effect on final readings and should therefore be taken carefully into consideration.
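Given a spec in the ± % F.S. per degree form described above, the worst-case contribution to a reading is straightforward to bound. The 0.1 % F.S./°C figure below is chosen for illustration, not taken from any particular datasheet:

```python
# Assumed spec form: +/- 0.1 % F.S. per degree C (illustrative figure).
def drift_ppm(drift_pct_fs_per_degc, full_scale_ppm, cal_temp_c, op_temp_c):
    """Worst-case reading shift from operating away from the
    calibration temperature."""
    delta_t = abs(op_temp_c - cal_temp_c)
    return drift_pct_fs_per_degc / 100 * full_scale_ppm * delta_t

# Calibrated at 25 C, operated at 40 C, on a 100 ppm F.S. monitor:
print(round(drift_ppm(0.1, 100.0, 25.0, 40.0), 2))  # 1.5 ppm
```

A 1.5 ppm shift would swamp the 1.0 ppm "accuracy" discussed earlier, which is why this parameter deserves the careful attention the text recommends.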
Noise - the variation superimposed on the output signal, resulting either from outside influences such as RFI, ground loop feedback, power source variations, EMI, etc., or from inherent eccentricities of the device itself. Because of its nature, noise cannot be specified; the general rule is to identify its source and minimize it. In general terms, noise becomes more of an issue as resolution becomes tighter.
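Noise cannot be specified away, but filtering can trade it against response time, which echoes the resolution-versus-stability trade-off above. The sliding-window average below is one common, minimal approach (the sample data is invented):

```python
from collections import deque

# Illustrative only: a simple moving average smooths superimposed noise
# at the cost of slower response to real changes.
def moving_average(samples, window):
    """Smooth a noisy signal with a sliding-window mean."""
    buf, out = deque(maxlen=window), []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

noisy = [50.0, 50.4, 49.6, 50.3, 49.7, 50.2, 49.8]
smoothed = moving_average(noisy, 4)
print(round(smoothed[-1], 3))  # mean of the last four samples
```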
The information above shows that accuracy is a term that can mean many things to different people. As such, it should be used sparingly when discussing sensing devices.
Last Updated: October 1, 2012