Thursday, June 26, 2008

Uncertainty Made Easy

About this Article
This paper by Ian Instone was first presented at the Institution of Electrical Engineers, London in October 1996 at their Colloquium entitled "Uncertainties made easy".
Note: Details of the current versions of recommended uncertainty guidance publications can be found on the Uncertainty Resources page in this section.


Simplified Method for Assessing Uncertainties in a Commercial, Production Environment

Introduction

With the introduction of Edition 8 of NAMAS document NIS3003 (1), and the inclusion of the principles outlined in the ISO Guide to the Expression of Uncertainty in Measurement (2), the assessment of measurement uncertainties has become a task better suited to a mathematician than to the average calibration engineer. In some companies with small calibration departments it might be possible for all of the engineers to be re-educated in the assessment of uncertainties; in larger laboratories, however, it is more usual for individual engineers to become specialists in particular aspects of the calibration process. This paper demonstrates a simplified approach to uncertainty assessment which falls broadly within the guidelines set out in both NIS3003 and the ISO Guide.

One of the first stumbling blocks in NIS3003 is the requirement to derive a measurement equation. While this is a useful skill which may demonstrate a more thorough understanding of the measurement principles, it serves mainly as an additional step in the uncertainty assessment process, a step not thought necessary in the previous seven editions of NIS3003. The next step, deriving sensitivity coefficients by partial differentiation, will send most calibration engineers reaching for a mathematics textbook. Fortunately, in many cases these two steps can be replaced by a more practical approach: a list of contributions to the uncertainty budget can be used in place of the measurement equation, and each term can be "differentiated" experimentally by varying the quantity over its range and measuring its influence on the measurand. For instance, if it has been determined that temperature variations influence the quantity being measured then, rather than produce a measurement equation which includes temperature and partially differentiate it, one can simply perform the measurement, change the temperature by the specified amount and measure again. The resultant change in the measurand becomes a contribution to the uncertainty budget. There are also cases where the same approach may be used without the need to perform measurements to obtain the data. For instance, many resistors have a temperature coefficient specification of the form ±N parts per million per degree Celsius. Assuming the temperature is controlled to within ±2°C, the change in the value of the resistor due to temperature fluctuations is given by:

N parts per million x 2°C
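As a minimal sketch of this shortcut (the 10 ppm/°C coefficient below is a hypothetical value, not from the paper):

```python
# Worst-case contribution from a temperature coefficient specification:
# a resistor specified at +/-N ppm per degC, in a laboratory controlled
# to within +/-2 degC, can change by at most N ppm x 2.

def temp_contribution_ppm(tempco_ppm_per_degC, temp_band_degC):
    """Worst-case change in value (ppm) due to temperature fluctuations."""
    return tempco_ppm_per_degC * temp_band_degC

# e.g. a hypothetical 10 ppm/degC resistor in a +/-2 degC laboratory:
print(temp_contribution_ppm(10, 2))  # 20 (ppm)
```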

Most contributions to an uncertainty budget can be assessed using either method. The practical method described will often yield smaller values because it is based on measurements performed on only a small number of items, whereas the specification-based method must cover the entire population of that instrument and so will normally produce larger contributions to the uncertainty budget.

Type-A Uncertainties

In a commercial calibration laboratory it is often not economical to perform several sets of measurements on a particular instrument solely to produce a value for the random (Type-A) uncertainty contribution. The alternative method shown in NIS3003 is preferred and is usually employed where possible. In cases where multiple measurements are performed it is usual practice to calculate the standard deviation of the population. The estimated standard deviation of the uncorrected mean of the measurand is then calculated using:

Esd = Psd / sqrt(N)

Where:
Esd is the estimated standard deviation of the uncorrected mean of the measurand
Psd is the standard deviation of the population of values
N is the quantity of repeated measurements

When the quantity of measurements to be performed on the equipment being calibrated is limited to one set of measurements then N in the equation above will be 1. The standard deviation of the population Psd will previously have been determined from an earlier Type-A evaluation based upon a large number of repeated measurements. In an ideal world the measurements would be repeated on several instruments within the family and the worst case standard deviation used in the Type-A assessment. In practice however, providing the assessment techniques outlined in this paper are employed, the Type-A contribution to the uncertainty budget can often be shown to be negligible so the need to make a very large number of repeated measurements is reduced.
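The arithmetic above can be sketched as follows; the readings are invented for illustration, standing in for an earlier Type-A evaluation with many repeats:

```python
import statistics

def estimated_sd_of_mean(psd, n):
    """Esd = Psd / sqrt(N): estimated standard deviation of the mean."""
    return psd / n ** 0.5

# Psd determined once from a large earlier run (values are illustrative);
# the instrument under calibration is later measured only once, so N = 1.
earlier_run = [10.01, 10.03, 9.98, 10.02, 10.00, 9.99, 10.01, 10.02]
psd = statistics.stdev(earlier_run)   # sample standard deviation
esd = estimated_sd_of_mean(psd, 1)    # equals Psd when N = 1
print(psd, esd)
```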

In an ideal world where customers are willing to pay unlimited amounts for their calibrations, or where we have very large quantities of similar instruments to calibrate, it is a fairly simple matter to measure several instruments many times and obtain a good, reliable estimate of the standard deviation. In reality, customers have limited budgets and calibration laboratories rarely have even small quantities of a particular instrument available for extensive testing. A simpler, more cost-effective method is required.

Before embarking upon the assessment of uncertainties we need to understand exactly what our customer expects of their calibration report and what use they will make of it. For the majority of simple reference standards, such as resistors, standard cells and capacitors, it is likely that the measured values will be used by the customer, so an uncertainty assessment as defined by NIS3003 will be required. For the great majority of instruments, however, it is not possible to make use of the values obtained during calibration, so it is usually only necessary to provide a calibration which demonstrates that the instrument is operating within its specifications. In these cases it is not necessary to provide measurements with the lowest possible uncertainties, which allows some compromises to be made.

ISO 10012-1 (3) suggests that we should aim for an accuracy ratio between the specification of the instrument being calibrated and the measurement uncertainty of greater than 3:1. The American interpretation of ISO Guide 25 (5), ANSI/NCSL Z540-1 (4), suggests that uncertainties become significant when the accuracy ratio is less than 4:1. If we assume that the instrument specification has the same coverage factor as the uncertainty, the following expression describes the combination of uncertainty and specification which should be used when the instrument is itself used to make measurements:

§ = sqrt[S² + U²]

Where:
§ is the resultant expanded specification resulting from the calibration
S is the specification of the parameter being measured
U is the uncertainty of measurement when performing the calibration

In cases where S >= 4U the effect of the uncertainty upon the specification can be shown to be negligible. For instance, assume that S = 8 and U = 2, then:

§ = sqrt[8² + 2²]
  = sqrt[64 + 4]
  = sqrt[68]
  = 8.25

Therefore, with an accuracy ratio of 4:1 the effective specification expands by 3.1%. As most uncertainties are quoted to only two significant figures, it is unlikely that this small increase would have any effect. Repeating the same calculation with an accuracy ratio of 3:1 produces an increase of only 5.4%.
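A short check of these percentages, assuming the sqrt(S² + U²) combination above:

```python
import math

def effective_spec(spec, uncertainty):
    """Effective specification after calibration: sqrt(S^2 + U^2)."""
    return math.sqrt(spec ** 2 + uncertainty ** 2)

# 4:1 accuracy ratio (S = 8, U = 2): the specification grows by ~3.1%
print(effective_spec(8, 2) / 8 - 1)   # ~0.031
# 3:1 accuracy ratio (S = 6, U = 2): growth of ~5.4%
print(effective_spec(6, 2) / 6 - 1)   # ~0.054
```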

The same analogy can be used when assessing the significance of a particular uncertainty contribution. Type-A uncertainties are those assessed using statistical methods, usually based on many sets of measurements, making them the most expensive to assess. Using the model above we can show that Type-A uncertainties are insignificant when they are less than 30% of the magnitude of the Type-B uncertainties:

Total Uncertainty = Type-B uncertainties, where Type-A < 0.3 x Type-B

and:

Effective Specification = Specification, where Total Uncertainty < 0.3 x Specification

From the two conditions above, Type-A uncertainties can be regarded as insignificant when they are less than 0.3 x 0.3 = 0.09 of the specification being tested, or, in approximate terms, Type-A uncertainties can be regarded as negligible when they are less than 10% of the specification.

Verifying that an uncertainty contribution is less than a given value is usually much easier than assessing its precise magnitude. One method, described in an earlier paper (6), normally requires only two complete sets of measurements on the same instrument. One set of measurements is then subtracted, point by point, from the other set. The largest difference is taken as a conservative estimate of the Type-A uncertainty contribution. This technique has been verified many times against uncertainties assessed in the traditional way and has always produced an acceptably conservative estimate of the Type-A contribution, providing that an adequate number of measurements are compared across the range. Assuming that the comparison produces no values outside the limits defined earlier (10% of the DUT specification or 30% of the Type-B uncertainty estimate), it can be assumed that the Type-A uncertainties are not significant. To provide good confidence and consistency in the assessment process, the value defined as insignificant should always be included in the assessment.
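A sketch of this two-run check (the readings, specification and Type-B figure below are invented; the exact procedure is described in reference 6):

```python
def type_a_estimate(run1, run2):
    """Largest point-by-point difference between two measurement runs:
    a conservative estimate of the Type-A contribution."""
    return max(abs(a - b) for a, b in zip(run1, run2))

def type_a_negligible(run1, run2, spec, type_b):
    """True if the estimate is below 10% of the DUT specification and
    below 30% of the Type-B uncertainty estimate."""
    est = type_a_estimate(run1, run2)
    return est < 0.10 * spec and est < 0.30 * type_b

run1 = [1.002, 0.998, 1.001, 1.000]   # first complete set of measurements
run2 = [1.001, 1.000, 1.000, 1.002]   # second set on the same instrument
print(type_a_negligible(run1, run2, spec=0.05, type_b=0.02))  # True
```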

It is also possible to use values for the Type-A assessment gained from other, related instruments, providing some knowledge of the construction of the instrument under test is available. For instance, a laboratory may already have assessed a certain 50MHz to 18GHz signal generator and verified that the Type-A uncertainty contribution meets the criteria outlined above. A 12GHz signal generator from the same family is then submitted for assessment. In this case, providing the two signal generators share similar designs, hardware and layouts, and the same test methods and equipment are used, it would be reasonable to apply the 18GHz generator's Type-A assessment to both. In other cases it might be possible to refer to published data for certain Type-A contributions.

In cases where these techniques reveal that the Type-A contributions are significant (as defined above) the uncertainty assessment should be performed in the usual way using many repeated measurements.

Sensitivity Coefficient

In most cases sensitivity coefficients can be assumed to be 1. However, there are some notable exceptions where other values must be used. One of these relates to the measurement of resolution bandwidth on a spectrum analyzer. In this case we have measurement uncertainties expressed in two different units: measurements of amplitude are expressed as an amplitude ratio (usually in dB) and measurements of frequency in Hz. The bandwidth measurement is often performed by applying a "pure" signal to the analyzer's input and setting the controls so that the signal shown below is visible. The envelope describes the shape of the filter, and normally we would measure the 3dB or 30% (below the reference) point of it (shown on the left of the figure below). To assess the sensitivity coefficient we need to determine the gradient of the graph at the measurement point. Spectrum analyzers often have an amplitude specification of 0.1dB per 1dB, therefore the amplitude uncertainty at 3dB will be ±0.3dB, or ±7%. We then move ±7% from the 70% point and read off the resultant change in frequency.

The resultant change in frequency due to amplitude uncertainty is: ±3.8 frequency units. Since this value has been found for an amplitude specification of ±0.3dB it will have a sensitivity coefficient of 1.

Fig.1 -- Bandwidth measurement (determining the sensitivity coefficient)

On the right of the figure is a similar construction for assessing the frequency uncertainty due to the amplitude uncertainty when the 6dB (50%) point is measured. In this case the amplitude uncertainty increases to ±0.6dB (0.1x6). As a linear ratio this equates to ±13%.

Reading from the graph this represents a frequency uncertainty of ±6 frequency units.

Assessing the uncertainty contributions in this way greatly reduces the possibility of errors as might occur if following the theory using partial differentiation. In addition a practical technique such as this is preferred by most calibration engineers.

Other empirical means of obtaining values for the uncertainty budget may also be employed. For instance it might be possible to establish a value for temperature coefficient by changing the environmental temperature by a few degrees. In this case we could derive a sensitivity coefficient for the output signal in terms of temperature change.
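For example, such an empirical sensitivity coefficient might be derived as below (the readings and temperature change are hypothetical):

```python
def sensitivity_coefficient(y_before, y_after, delta_t):
    """Change in the output signal per degree of temperature change."""
    return (y_after - y_before) / delta_t

# e.g. the output rises by 0.006 units when the room warms by 3 degC:
c_i = sensitivity_coefficient(10.000, 10.006, 3.0)   # units per degC
contribution = c_i * 2.0   # uncertainty contribution for a +/-2 degC band
print(c_i, contribution)
```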

Total Uncertainty Budget

One of the principal benefits of the latest revision of NIS3003 is the strong suggestion that all of the uncertainty contributions should be listed in a table along with their probability distributions. Whilst at first sight this seems a tedious task, it pays dividends later because it makes the contributors to the budget absolutely clear. The table below shows a typical example of an uncertainty assessment for a microwave power measurement at 18GHz using a thermocouple power sensor. These types of power sensor measure power levels relative to a known power, so a 1mW, 50MHz power reference is included on the power meter for this purpose. In most cases it is simpler, and more correct, to use a measuring instrument's specification rather than try to apply corrections and assess the resultant uncertainty. For the majority of measurements it is not possible to make corrections based upon a calibration report, as that report only indicates the instrument's calibration status at the time it was measured, and only when operated in the particular mode described on the certificate. It is not possible to predict the errors at any other points.

Symbol | Source of Uncertainty                 | Value ±% | Probability Distribution | Divisor | Ci | Ui ±%
-------|---------------------------------------|----------|--------------------------|---------|----|------
K      | Calibration factor at 18 GHz          | 2.5      | normal                   | 2       | 1  | 1.25
D      | Drift since last calibration          | 0.5      | rectangular              | sqrt(3) | 1  | 0.29
I      | Instrumentation uncertainty           | 0.5      | normal                   | 2       | 1  | 0.25
R      | 50 MHz reference spec.                | 1.2      | rectangular              | sqrt(3) | 1  | 0.69
M1     | Mismatch loss: sensor to 50 MHz ref.  | 0.2      | U-shaped                 | sqrt(2) | 1  | 0.14
M2     | Mismatch loss: sensor to 18 GHz gen.  | 5.9      | U-shaped                 | sqrt(2) | 1  | 4.17
A      | Type-A uncertainties                  | 2.1      | normal                   | 2       | 1  | 1.05
UC     | Combined standard uncertainty         |          | normal                   |         |    | 4.55
U      | Expanded uncertainty                  |          | normal (k=2)             |         |    | 9.10

Where:

Ci is the sensitivity coefficient used to multiply the input quantities to express them in terms of the output quantity.

Ui is the standard uncertainty resulting from the input quantity.

The standard uncertainties are combined using the usual root-sum-of-squares method and then multiplied by the appropriate coverage factor (in this case k=2). In some cases it will be appropriate to use a different coverage factor, perhaps when a 95% confidence level is not adequate, or when the input quantities are shown to be "unreliable". The Vi (degrees of freedom of the standard uncertainty) and Veff (effective degrees of freedom) columns have not been included in the table above, in order to simplify the assessment process.
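The table's arithmetic can be reproduced in a few lines; this is simply the root-sum-of-squares combination described above, using the example values:

```python
import math

budget = [  # (value +/-%, divisor, Ci) taken from the example table
    (2.5, 2.0, 1),            # K:  calibration factor at 18 GHz, normal
    (0.5, math.sqrt(3), 1),   # D:  drift, rectangular
    (0.5, 2.0, 1),            # I:  instrumentation, normal
    (1.2, math.sqrt(3), 1),   # R:  50 MHz reference, rectangular
    (0.2, math.sqrt(2), 1),   # M1: mismatch, U-shaped
    (5.9, math.sqrt(2), 1),   # M2: mismatch, U-shaped
    (2.1, 2.0, 1),            # A:  Type-A, normal
]

# Each Ui = Ci x Value / Divisor; combine by root-sum-of-squares.
u_c = math.sqrt(sum((c * v / d) ** 2 for v, d, c in budget))
u_expanded = 2 * u_c   # coverage factor k = 2 (~95% confidence)
print(f"{u_c:.2f} {u_expanded:.2f}")  # 4.55 9.10
```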

Degrees of Freedom

Degrees of freedom is a term used to indicate confidence in the quality of the estimate of a particular input quantity to the uncertainty budget. For the majority of calibrations performed under controlled conditions there will be no need to consider degrees of freedom, and a coverage factor of k=2 will be used. In cases where the Type-A uncertainty has been assessed using very few measurements, a different coverage factor, calculated using the degrees of freedom, would normally be applied. However, whilst the assessment method proposed in this paper is based on only two sets of measurements, experimental data confirm that this treatment (taking the worst-case difference) produces a reliable, conservative estimate of the Type-A uncertainties. In most cases the degrees of freedom can be assumed to be infinite, and evaluation of the t factor using the Welch-Satterthwaite equation is not necessary. NIS3003 provides some guidance on these methods but stresses that normally it is not necessary to employ them.
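For the rare cases where it is needed, the Welch-Satterthwaite calculation can be sketched as follows (a generic illustration, not taken from NIS3003; the input values are invented):

```python
def welch_satterthwaite(contribs):
    """Effective degrees of freedom for a combined standard uncertainty.
    contribs: list of (u_i, nu_i) pairs; use float('inf') for Type-B
    terms assumed to have infinite degrees of freedom."""
    u_c4 = sum(u ** 2 for u, _ in contribs) ** 2
    denom = sum(u ** 4 / nu for u, nu in contribs if nu != float('inf'))
    return float('inf') if denom == 0 else u_c4 / denom

# One Type-A term from 5 readings (4 degrees of freedom) plus two
# Type-B terms assumed to have infinite degrees of freedom:
print(welch_satterthwaite([(1.0, 4), (0.5, float('inf')), (0.8, float('inf'))]))
```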

Conclusion

The uncertainty assessment method described in this paper has been employed at Hewlett-Packard's UK Service Center for several years. External, internal and informal measurement audits have in every case confirmed that the uncertainties are being estimated with the expected level of confidence. This simplified approach is easier to understand and use, which enables more calibration engineers to contribute fully to uncertainty assessments.

References
  1. The expression of Uncertainty and Confidence in Measurement for Calibrations, NIS3003 Edition 8 May 1995.
  2. Guide to the Expression of Uncertainty in Measurement. BIPM, IEC, IFCC, ISO, IUPAC, OIML. International Organization for Standardization. ISBN 92-67-10188-9. BSI Equivalent: "Vocabulary for Metrology, Part 3. Guide to the Expression of Uncertainty in Measurement", BSI PD 6461: 1995.
  3. Quality assurance requirements for measuring equipment, Part 1. Metrological confirmation system for measuring equipment, ISO 10012-1:1992.
  4. Calibration Laboratories and Measuring and Test Equipment - General Requirements ANSI/NCSL Z540-1-1994.
  5. General requirements for the competence of calibration and testing laboratories, International Organization for Standardization, ISO Guide 25:1990.
  6. Calculating the Uncertainty of a Single Measurement from IEE Colloquium on "Uncertainties in Electrical Measurements", 11 May 1993, Author Ian Instone, Hewlett-Packard.
