Thursday, June 26, 2008

Uncertainty Myths

Myth vs. Truth
—Uncertainty—
Myth: ISO17025 requires that measured values and the measurement uncertainty be reported on a certificate.

Truth: Only if the certificate does not include a statement of the equipment's compliance with a stated specification. When a compliance statement is used, section 5-10-4 says that the results and the uncertainty must instead be maintained by the lab.
Myth: We need to determine our own measurement uncertainty, so we need to know the calibration lab's uncertainty.

Truth: If the calibration confirmed that the instrument met the manufacturer's specification, the effect of uncertainty on that compliance decision has already been taken into account (as required by ISO17025, para. 5-10-4-2). In this case, the user's own uncertainty budget starts with the product specification, and the calibration uncertainty is not included again.

If the calibrated item does not have a specification (i.e. the certificate provides only measured values), then the cal lab's uncertainty will need to be included in the user's own uncertainty analysis.
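The two cases above can be sketched numerically. The following is a minimal illustration using the standard root-sum-square combination of standard uncertainties (the GUM method); all of the numbers (spec limit, cal lab uncertainty, repeatability) are invented for the example, and the usual conversions are applied: a specification limit is treated as a rectangular distribution (divide by √3) and an expanded uncertainty quoted at k = 2 is divided by 2.

```python
import math

def combined_standard_uncertainty(contributions):
    """Root-sum-square combination of standard uncertainties (GUM)."""
    return math.sqrt(sum(u ** 2 for u in contributions))

# Case 1: the certificate confirms compliance with the manufacturer's
# spec. The spec limit (illustrative: +/-0.1 units, treated as a
# rectangular distribution) already accounts for the cal lab's
# uncertainty, so it is the only calibration-related contribution.
u_spec = 0.1 / math.sqrt(3)      # rectangular -> standard uncertainty

# Case 2: the certificate reports measured values only. The cal lab's
# expanded uncertainty (illustrative: U = 0.04 at k = 2) must be
# included alongside the user's other contributions.
u_cal = 0.04 / 2                 # expanded (k = 2) -> standard uncertainty
u_repeat = 0.02                  # user's own repeatability (illustrative)

u_case1 = combined_standard_uncertainty([u_spec, u_repeat])
u_case2 = combined_standard_uncertainty([u_cal, u_repeat])
print(f"Case 1 (spec-based) combined: {u_case1:.4f}")
print(f"Case 2 (values-only) combined: {u_case2:.4f}")
```

Note that in Case 1 the calibration uncertainty does not appear at all; in Case 2 it is just one contribution among the user's own.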

Myth: The need to know "uncertainty" is new. We've been certified against ISO9001:1994 for years and have never been asked before.

Truth: You've just been lucky, or were satisfactorily meeting the requirement without realizing it!

Look again at clause 4-11-1; it clearly states that "...equipment shall be used in a manner which ensures that the measurement uncertainty is known and is consistent with the required measurement capability."

For the majority of instrument users, the requirement is readily satisfied by referring to the equipment specifications. In general terms, the specification is the user's uncertainty.

Myth: The uncertainties that an accredited lab will report on a certificate are published in its Scope/Schedule.

Truth: The published capability represents the best (smallest) measurement uncertainties, perhaps applicable only to particular characteristics and types of tested equipment. It is very unlikely that those figures would be assigned to every calibration, given the wide variety of models seen. Until the measurements are made, the cal lab may not even be able to estimate the uncertainty that will be assigned, because the unit-under-test itself contributes to the uncertainty.

Myth: Published "best measurement uncertainty" can never be achieved because it assumes an ideal unit-under-test.

Truth: In the past, different practices were allowed by the various conformity assessment schemes. However, the European co-operation for Accreditation publication EA-4/02 (refer to Uncertainty Resources in this Basics section) recognized that harmonization was required and, in Appendix A, establishes definitions.

This means that, certainly within Europe, best measurement capability (BMC) must include contributions associated with the normal characteristics of the equipment the lab expects to calibrate. For example, it is not acceptable to base the uncertainty of an attenuation measurement on a device having an assumed perfect match. Some BMCs are qualified with the phrase "nearly ideal" regarding the test item, but this is only appropriate where the capability does not depend upon the item's characteristics and where such near-ideal items are available and routinely seen by the lab.

Myth: Calibrations without uncertainty are not traceable.

Truth: It is true that the internationally agreed definition of traceability requires the uncertainty of the comparisons to be stated. However, that does not mean a calibration certificate must include the uncertainty (or the measured values): ISO17025 and other standards allow a specification compliance statement instead, although the information must still be maintained by the lab.

Myth: By using a correction based on the instrument's error as determined by calibration, the working specification can be tightened. This effectively reduces the user's own measurement uncertainty to that of the calibrating lab.

Truth: The equipment manufacturer's specifications cannot be ignored. They include allowances for drift over time and for environmental conditions, whereas the calibration represents the instrument's performance at one time and under particular conditions. The myth dangerously assumes that the "error" is constant despite these variables.
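The point can be made with a small budget. In this hedged sketch (all figures invented), a calibration correction removes the known error, but the manufacturer's drift and temperature allowances remain in the budget, so the user's combined uncertainty stays well above the cal lab's alone.

```python
import math

# Illustrative budget after applying a calibration correction.
u_cal = 0.02 / 2               # cal lab expanded uncertainty U = 0.02, k = 2
u_drift = 0.05 / math.sqrt(3)  # 12-month drift limit, rectangular distribution
u_env = 0.03 / math.sqrt(3)    # temperature-coefficient limit, rectangular

# Root-sum-square combination, then expand at k = 2 (~95 % coverage).
u_combined = math.sqrt(u_cal ** 2 + u_drift ** 2 + u_env ** 2)
U_expanded = 2 * u_combined

print(f"u_c = {u_combined:.4f}, U (k=2) = {U_expanded:.4f}")
# u_combined remains larger than the cal lab's 0.01 standard
# uncertainty: the correction alone does not shrink the user's
# uncertainty to the lab's.
```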
