Sunday, June 29, 2008

Why Calibrate? Or "Calibration? How does that help me?"

British scientist Lord Kelvin (William Thomson 1824-1907) is quoted from his lecture to the Institution of Civil Engineers, 3 May 1883...
"I often say that when you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot express it in numbers your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be."

This famous remark emphasizes the importance that measurement has in science, industry and commerce. We all use and depend upon it every day in even the most mundane aspects of life -- from setting your wristwatch against the radio or telephone time signal, to filling the car fuel-tank or checking the weather forecast. For success, all depend upon proper calibration and traceability to national standards.

As components age and equipment undergoes changes in temperature or sustains mechanical stress, critical performance gradually degrades. This is called drift. When this happens your test results become unreliable and both design and production quality suffer. Whilst drift cannot be eliminated, it can be detected and contained through the process of calibration.

Calibration is simply the comparison of instrument performance to a standard of known accuracy. It may involve only the determination of deviation from nominal, or may include correction (adjustment) to minimize the errors. Properly calibrated equipment provides confidence that your products/services meet their specifications. Calibration:

* increases production yields,
* optimizes resources,
* assures consistency and
* ensures measurements (and perhaps products) are compatible with those made elsewhere.

By making sure that your measurements are based on international standards, you promote customer acceptance of your products around the world. But if you're still looking to justify that the cost of calibration does add value, check out some of the calibration horror stories that have been reported.

Saturday, June 28, 2008

Paperless Calibration

Bulging filing cabinets and over-full hanging files are a common office scene. But as far as calibration records are concerned, is the "paperless office" taboo?

What Do You Keep in Your Drawers?

Most quality managers keep calibration results and certificates in their drawers! Discussions with many people in industry whose responsibilities include the control of test instruments suggest that a filing cabinet full of paper calibration 'evidence' is regarded as an integral part of the quality system -- without it, audits would fail and the business would crumble. But when pressed for a rationale for such belief, three main reasons to maintain paper records emerge:

  • They believe that auditors would not accept any alternative
  • They believe that ISO9000 or accreditation agencies demand it
  • It is historical; they have always done it and it is a comfort factor.

Alternative Feared

During these discussions a potential alternative option based around electronic records being retained by the calibration supplier, to be provided electronically on demand, was met with a mixed reaction.

On one hand, the positive aspects of fewer papers to handle, file, retain, refresh, retrieve, etc. were enthusiastically supported. However, the dilemma that such a change might have an impact on audit success tempered that initial enthusiasm. Equipment managers' fundamental belief is that neither ISO/IEC Guide 25 or EN45001 (now ISO17025) assessors nor ISO9000 audit bodies would recognize or be comfortable with such a 'virtual' record system. This fear alone would deter them from seriously considering any such change.

This collective feedback formed the basis of a discussion between Hewlett-Packard in Britain and senior officials from the United Kingdom Accreditation Service (UKAS), the agency responsible both for accrediting calibration/test labs and for overseeing quality management system registrars. The goal was to establish, for the record, whether UKAS would endorse a paperless system. The outcome of this meeting is summarized in a letter to Agilent Technologies from Brian Thomas, Technical Director of UKAS, in which he states that the responsibility of the user of calibration services (the customer)

"....is to be able to demonstrate to the assessor that it can, and does when needed, obtain evidence of calibration and that it has an effective records system enabling tracking back of full calibration data and certification for the defined period."

This doesn’t mean that records are necessarily kept locally by the equipment-user in paper form but that they could, indeed, be retained by the supplier of the service and provided when needed at any time in the future. In most cases, the only data a company needs in real-time relates to parameters found to be outside the instrument’s specification when initially tested (on-receipt status) so that a potential product-recall process may be invoked. But even this doesn’t need to be provided on paper -- it could be made available to the customer via the Internet (e.g. e-mail or a secure web server) or through a variety of other electronic means (fax, floppy disk, etc.).

Control is Crucial, not Mechanism

Whichever medium is most appropriate, it is the evidence of control that is imperative, not the evidence of paperwork. In Brian’s words:

"In principle, your customers would be able to contract you to retain their calibration records; this arrangement would then become part of their system for retention of records. UKAS assessment of such a customer would address whether this system provided access that was easy, quick and reliable and controlled from the point of view of security, confidentiality and accuracy. Assuming this to be so in practice then the system would be acceptable to UKAS."

This alternative solution is, therefore, one which UKAS would support provided that the customer and the supplier met some key requirements. Those requirements are concisely detailed by Brian as:

"The documentation of such records and certification is acceptable in any form of medium, hard copy, electronic, etc. provided that it is legible, dated, readily identifiable, retrievable, secure and maintained in facilities that provide a suitable environment to minimize deterioration or damage and to prevent loss."

Dispelling Reluctance

So, the voice of industry is clear. Companies would like to take advantage of contemporary technology by contracting out their data and certificate storage and, provided that their suppliers could satisfy their needs (echoed by the requirements of UKAS above), they are willing to forego historical practices and trust virtual documentation. The most significant reason they are reluctant to take this step is fear of audit failure.

Agilent Technologies believes that a major step forward would be made if quality system and accreditation consultants and assessors could advise their clients that, far from impeding audit success, such a move could enhance it -- whilst at the same time saving space, time and ultimately money for both the equipment owner and the calibration provider.

Calibration, Verification or Conformance?

When discussing calibration requirements with a potential supplier it's obviously important to understand what's being offered. Other articles in this section should help you to establish your requirements and distinguish the differences between available services. But one of the variations has sometimes confused even calibration laboratories and quality auditors. It's a matter of the difference between calibration, verification and even conformance.

Similar to the often-confused specification terms accuracy and precision, a myth became "established wisdom" that calibration and verification are differentiated on the basis of quality or integrity.


Popular opinion holds that verification is a quick check of performance, perhaps made without any real traceability, whereas calibration provides genuine assurance that the product really meets its specification. In fact, the US national standard ANSI/NCSL-Z540 defines "verification" as being calibration plus evaluation of conformity against a specification. This definition originated with the now-obsolete ISO/IEC Guide 25, but neither its replacement (ISO/IEC 17025) nor the International Vocabulary of Measurement (VIM) currently includes it or any alternative. The only relevant international standard that includes terminology covering the process of both calibrating and evaluating a measuring instrument's performance against established criteria is ISO10012, which uses the rather cumbersome term "metrological confirmation".

Calibration is simply the process of comparing the unknown with a reference standard and reporting the results. For example:
Applied= 1.30V, Indicated= 1.26V (or Error= -0.04V)
Calibration may include adjustment to correct any deviation from the value of the standard.

Verification, as it relates to calibration, is the comparison of the results against a specification, usually the manufacturer's published performance figures for the product (e.g. Error= -0.04V, Spec= ±0.03V, "FAIL"). Some cal labs include a spec status statement on their Certificate of Calibration (i.e. the item did/did not comply with a particular spec).
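
As an illustrative sketch (the function name and values are examples, not any lab's actual procedure), verification against a spec limit amounts to comparing the calibration error with the specification:

```python
def verify(applied, indicated, spec_limit):
    """Compare a calibration result against a symmetric specification
    limit. Returns the error and a pass/fail verdict."""
    error = indicated - applied
    status = "PASS" if abs(error) <= spec_limit else "FAIL"
    return error, status

# The example from the text: 1.30 V applied, 1.26 V indicated,
# against a spec of +/-0.03 V -- the error of -0.04 V exceeds it.
error, status = verify(1.30, 1.26, 0.03)   # status = "FAIL"
```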

Where no judgment is made about compliance, or no correction has been made to minimize error, it has been suggested that Certificate of Measurement would be a more descriptive title to aid recognition of the service actually performed. Some suppliers also use Certificate of Verification where no measurements are involved in the performance testing (such as for certain datacomm/protocol analyzers), rather than Certificate of Functional Test, as this latter term is often perceived as covering only brief, informal checks such as might be performed following a repair (often termed "operational verification").

Verification can also relate to a similar evaluation process carried out by the equipment user/owner where the calibration data are compared to allowances made in the user's uncertainty budget (e.g. for drift/stability between cals) or other criteria such as a regulation or standard peculiar to the user's own test application.

Verification is not intermediate self-checking between calibrations. Such checks are better termed confidence checks, which may also be part of a Statistical Process Control regime. The results of confidence checks may be used to redefine when a "proper" calibration is required or may prompt modification of the item's working spec as assigned by the user.

But what about conformance, especially regarding the meaning of a Certificate of Conformance? Typically available when an instrument is purchased, it is now generally recognized that such a document has little value as an assurance of product performance. Of course, the manufacturer expects that the product conforms to its spec but, in this sense, the document simply affirms that the customer's purchase order/contract requirement has been duly fulfilled.

Thursday, June 26, 2008

Uncertainty Myths

—Uncertainty—

MYTH: ISO17025 requires that measured values and measurement uncertainty be reported on a certificate.

TRUTH: This is true only if the certificate does not include a statement concerning the equipment's compliance with a stated specification. Where such a compliance statement is made, section 5-10-4 says that the results and uncertainty must still be maintained by the lab.
MYTH: We need to determine our own measurement uncertainty, so we need to know the calibration lab's uncertainty.

TRUTH: If the calibration confirmed that the instrument met the manufacturer's specification, the effect of uncertainty on that status decision has already been taken into account (as required by ISO17025, para. 5-10-4-2). In this case, the user's own uncertainty budget starts with the product specification and the calibration uncertainty is not included again.

If the calibrated item does not have a specification (i.e. the certificate provides only measured values) then the cal lab's uncertainty will need to be included in the user's own uncertainty analysis.
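
As an illustration (not from the original article), a GUM-style budget combines standard uncertainty contributions in quadrature; the contribution names and magnitudes below are invented examples:

```python
import math

def combined_standard_uncertainty(contributions):
    """Root-sum-of-squares (quadrature) combination of independent
    standard uncertainties, as used in a GUM-style budget."""
    return math.sqrt(sum(u ** 2 for u in contributions))

# Hypothetical budget for a user whose certificate reports only
# measured values, so the cal lab's uncertainty must be included:
u_callab = 0.010      # cal lab's reported standard uncertainty (V)
u_drift = 0.020       # allowance for drift between calibrations (V)
u_resolution = 0.005  # instrument resolution contribution (V)

u_c = combined_standard_uncertainty([u_callab, u_drift, u_resolution])
U = 2 * u_c           # expanded uncertainty, coverage factor k = 2
```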

MYTH: The need to know "uncertainty" is new. We've been certified against ISO9001:1994 for years and have never been asked before.

TRUTH: You've just been lucky, or were satisfactorily meeting the requirement without realizing it!

Look again at clause 4-11-1; it clearly states that "...equipment shall be used in a manner which ensures that the measurement uncertainty is known and is consistent with the required measurement capability."

For the majority of instrument users, the requirement is readily satisfied by referring to the equipment specifications. In general terms, the specification is the user's uncertainty.

MYTH: The uncertainties that an accredited lab will report on a certificate are published in their Scope/Schedule.

TRUTH: The published capability represents the best (smallest possible) measurement uncertainties, perhaps applicable to particular characteristics and types of tested equipment. It's very unlikely that those figures would be assigned to all calibrations made, assuming a wide variety of models are seen. Until measurements are made, it may not be possible for the cal lab to estimate the uncertainty that will be assigned, because the unit-under-test contributes to the uncertainty.

MYTH: Published "best measurement uncertainty" can never be achieved because it assumes an ideal unit-under-test.

TRUTH: In the past there have been different practices allowed by the various conformity assessment schemes. However, the European co-operation for Accreditation publication EA-4/02 (refer to Uncertainty Resources in this Basics section) recognizes that harmonization was required and, in Appendix A, establishes definitions.

This means that, certainly within Europe, best measurement capability (BMC) must include contributions associated with the normal characteristics of equipment the lab expects to calibrate. For example, it's not acceptable to base the uncertainty of an attenuation measurement on a device having an assumed perfect match. Some BMCs are qualified with the phrase "nearly ideal" regarding the test item, but this means that the capability does not depend upon the item's characteristics and that such near-perfect items are available and routinely seen by the lab.

MYTH: Calibrations without uncertainty are not traceable.

TRUTH: It is true that the internationally agreed definition of traceability includes a need for the uncertainty of the comparisons to be stated. However, it doesn't mean that a calibration certificate must include uncertainty (or measured values); ISO17025 and other standards allow their omission if a specification compliance statement is used, although this information must be maintained by the lab.
MYTH: By using a correction based on the instrument's error as determined by calibration, the working specification can be tightened. This effectively reduces the user's own measurement uncertainty to that of the calibrating lab.

TRUTH: The equipment manufacturer's specifications cannot be ignored. For instance, they include allowances for drift over time and for environmental conditions. In contrast, the calibration represents a performance assessment at a particular time and in particular conditions. Yet the myth dangerously assumes that the "error" is constant despite these variables.

Calibration Myths

—Calibration—

MYTH: A Certificate of Calibration means that the instrument met its specification, at least when it was tested.

ALSO: Calibration means that the equipment was adjusted back to nominal.

TRUTH: Whether this is correct or not depends on the calibration laboratory's service definitions or on what was agreed between supplier and customer. The international meaning of "calibration" does not require that errors detected by the measurement comparison process are corrected; adjustment to return an item to specification compliance may, or may not, be performed.

Unless the Certificate contains a statement affirming that the item met the published specification it is merely a report of the measurements made. In this case it is left to the equipment user to review the data against requirements. The equipment may have been found and returned to the user out-of-tolerance!

MYTH: Some equipment is more expensive to have calibrated than to replace with a new one each year. Just scrap the old item, which was probably worn anyway.

TRUTH: The first part of this assertion is TRUE, but it could be that a calibration certificate is not provided with the new purchase. Some users are not concerned, perhaps relying upon the manufacturer's reputation to deliver new products that are specification-compliant, which may be a justifiable risk.

Less justifiable is the suggested practice to dispose of the old item without first getting it calibrated. How would you know if it had been used in an out-of-tolerance condition? If it had been out-of-spec, would it affect the integrity or quality of the process or end-product? If so, the proposal is a false economy !

MYTH: Only measuring equipment with the possibility of adjustment needs periodic calibration. For example, liquid-in-glass thermometers only need certification when first put into service; they either work or are broken.

TRUTH: Just because an item is not adjustable doesn't mean that it's perfectly stable. Some standards may be subject to wear which changes their value (e.g. a gauge block), or they may be invisibly damaged, leading to non-linear or odd behavior (e.g. a cracked glass thermometer).

Or the material from which they are constructed may also not be stable. For example a quartz crystal oscillator changes its resonant frequency because mechanical stress in the crystalline structure is released over time.

MYTH: If an item needs routine calibration, the manufacturer states what is necessary in the equipment's handbook; otherwise calibration isn't required.

TRUTH: It is true that some manufacturers provide such advice (Agilent Service Manuals spring to mind!). But many, typically smaller, companies do not make this investment. It's unsafe to assume that no advice means no calibration.

Also be aware that industry practices change over time and a manufacturer's recommendations as published thirty years ago may not be as metrologically rigorous as those produced to match today's market expectations.

MYTH: The original manufacturer or the calibration lab defines the appropriate calibration interval for the product or item, and the user is bound by that periodicity.

TRUTH: It's often unrecognized that a product's specification is generally linked to a time period. Simplistically, the manufacturer may establish the specification having assessed the accuracy and drift of prototype units. It may well be statistically justified, for a particular confidence level, that a certain percentage of the product population (all those produced) are likely to still comply with the spec after the stated period. Whatever the mechanism used, the calibration interval is only a recommendation.

Some cal labs offer a service to manage the periodicity of customers' equipment based on the accumulated cal history. Otherwise, this risk management responsibility remains with the user.

MYTH: Safety regulations stipulate that the legal maximum period allowed between cals is one year.

TRUTH: The problem with such a policy is that it may be implemented differently from what is intended. Perhaps all items will be assigned a one-year interval without any regard for its justification or its applicability to the use of a particular piece of equipment.

The assignment of a suitable interval should be recognized as part of an equipment user's risk management strategy. One must consider the knock-on effects if the item is later found to have been used in an out-of-tolerance condition (e.g. product recall costs). So, there's a balance to be achieved between the inconvenience and cost of excessive calibration and impact of unreliable kit.

In safety-critical applications any degree of risk may be unacceptable but this would probably be implemented by parallel and back-up systems. Total reliance upon a single piece of equipment, even if tested every day, would be unusual.

Standards Myths



—Standards—

MYTH: ISO17025 states that it's equivalent to ISO9000, so ISO9000 must be equivalent to ISO17025.

TRUTH: ISO17025 does indeed state, in its Introduction and in paragraph 1-6, that compliance with the standard means that the laboratory's quality system for its calibration or testing activities also meets the criteria of ISO9001/2. Two points to emphasize, though:

  1. The activities of many service providers extend beyond just calibration or testing (e.g. repair, supply of parts, training, etc.), where 17025 does not apply.
  2. The equivalence is to the 1994 version of the ISO9000 standards, which was superseded in late 2000.

MYTH: My factory's quality system complies with ISO9000, so all my equipment must be calibrated "Before & After" adjustment.

TRUTH: A calibration service that provides assessment of the product's performance on receipt and, if necessary, after adjustment or repair serves two purposes.

  1. It enables analysis of the equipment's stability over time.
  2. More significantly, if the on-receipt performance did not meet the user's accuracy requirements, an investigation of its impact can be triggered that may result in product or work recall.

These possibilities need only apply to equipment affecting the quality of the factory's product or service, for example that used for alignment or end-of-line inspection. Understanding the distinction can save a lot of money !

MYTH: Accreditation agencies define the extent of testing for various products so that users can have confidence in their equipment's overall performance.

TRUTH: In some countries there are national and regulatory standards applicable to some measuring equipment. These usually relate to legal metrology (i.e. measurements made in the course of consumer trade), statutory codes (e.g. safety) or certain sectors of industry.

However, accreditation bodies do not stipulate that these must be used although labs would generally do so where applicable. Also, there are no standards concerning the typical general purpose instruments that may be used in the electronics industry, for example.

Although accreditation criteria include a need for calibration certificates to draw attention to limitations in the scope of testing performed versus the product's capability, it is left to the client and supplier to agree the content of the service. Whether the calibration utilizes any recommendations of the equipment's manufacturer is part of this negotiation.

MYTH: My calibration supplier is ISO17025 accredited, so all the calibrations they undertake meet that standard.

TRUTH: The results of a calibration performed under the scope of the accreditation are reported on a certificate bearing the authorized brand-mark of the accreditation program. For commercial reasons, most accredited laboratories offer at least two calibration service levels -- a certificate with the accreditation logo or a company-proprietary certificate.

The processes used to undertake the calibration and the extent of testing may be the same in both cases, or may differ. Some accreditation programs allow the inclusion of (a minority of) measurements which are not within the lab's accredited capability, providing they are clearly identified as non-accredited.

MYTH: Results which are simply reported as "Pass" or "Fail" are not acceptable.

TRUTH: Recording of numerical measurement data is not relevant for some tests. This may be because the test is of the "go, no go" type (e.g. checking a bore using a plug gauge) or because the test procedure establishes known test conditions and looks for a satisfactory response in the unit-under-test (e.g. checking the input sensitivity of a frequency counter by applying a signal whose amplitude equals the specified sensitivity and noting whether stable triggering is observed).

To summarize, pass/fail is valid where the decision criteria are defined (i.e. specification limits).

MYTH: A supplier that has an ISO9000 certificate is good enough.

TRUTH: This may be reasonable, but questions concerning the scope of the certification should be asked. If the quality system that was assessed related to a company's pressure-sensor manufacturing operation in Chicago, how much assurance does that endow on micrometer service at their Dallas repair office? Possibly none! The scope of registration is explicit in coverage.
MYTH: Only accredited calibrations are traceable to national standards.

TRUTH: Traceable measurements are those supported by records that can demonstrate an unbroken series of calibrations or comparisons against successive standards of increasing accuracy (reducing uncertainty), culminating in a recognized national metrology institute.

Measurement traceability is, of course, also reviewed as part of an ISO9000 quality system certification.

MYTH: My own testing laboratory is accredited against ISO17025, so our instruments must be calibrated at an accredited lab.

TRUTH: This may depend upon the interpretation of the standard by the particular accreditation body. Clause 5-6-2-1-1 of ISO17025 does not actually stipulate that traceability must be obtained only from an accredited facility, only that the supplier "can demonstrate competence, measurement capability and traceability".

The British accreditation agency has confirmed that it will not add supplementary requirements to the 17025 criteria. It also accepts the possibility of traceability to a non-accredited source provided that sufficient evidence is available to UKAS to confirm that the supplier complies with the standard and that the lab being audited by UKAS has the critical technical competence to make such an assessment.

Uncertainty Made Easy

About this Article
This paper by Ian Instone was first presented at the Institution of Electrical Engineers, London in October 1996 at their Colloquium entitled "Uncertainties made easy".
Note: Details of the current versions of recommended uncertainty guidance publications can be found on the Uncertainty Resources page in this section.


Simplified Method for Assessing Uncertainties in a Commercial, Production Environment

Introduction

With the introduction of Edition 8 of NAMAS document NIS3003 (1) and the inclusion of the principles outlined in the ISO Guide to the Expression of Uncertainty in Measurement (2), the assessment of measurement uncertainties has become a task more suited to a mathematician than to the average calibration engineer. In some companies with small calibration departments it might be possible for all of the engineers to be re-educated in the assessment of uncertainties; however, in larger laboratories it is more usual for engineers to become specialists in certain aspects of the calibration process. This paper aims to demonstrate a simplified approach to uncertainty assessment which falls broadly within the guidelines set out in both NIS3003 and the ISO Guide.

One of the first stumbling blocks in NIS3003 is the necessity to derive a measurement equation. Whilst this is a useful skill which might demonstrate a more thorough understanding of the measurement principles, it seems only to serve as an additional step in the uncertainty assessment process -- a step not thought necessary in the previous seven editions of NIS3003. The next step, deriving sensitivity coefficients by partial differentiation, will cause most calibration engineers to reach for the mathematics textbook. Fortunately, in many cases these two steps can be replaced by a more practical approach. A list of contributions to the uncertainty budget can be used in place of the measurement equation, and each term may be evaluated, in place of partial differentiation, by varying the quantity over its range and measuring its influence on the measurand. For instance, if it has been determined that temperature variations influence the quantity being measured then, rather than produce a measurement equation which includes temperature and partially differentiate it, one can simply perform the measurement, change the temperature by the specified amount and re-measure. The resultant change in the measurand becomes a contribution to the uncertainty budget. There are also cases where the same approach may be used but where there is no need to perform the measurements to obtain the data. For instance, many resistors have a temperature coefficient specification of the form ±N parts per million per degree Celsius. Assuming the temperature is controlled to within ±2°C, the change in the value of the resistor due to temperature fluctuations will be given by:

±N parts per million × 2°C

Most contributions to an uncertainty budget can be assessed using either method. The practical method will often yield smaller values because it is based on measurements performed on only a small quantity of items, whereas the specification-based method covers the entire population of that instrument and so will normally produce larger contributions to the uncertainty budget.
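
As a rough illustration (not part of the original paper), the specification-based route above can be sketched in Python. The ±5 ppm/°C figure is an invented example, and the division by √3 assumes the usual rectangular-distribution treatment of a Type-B limit value:

```python
import math

def temp_contribution_ppm(tempco_ppm_per_degC, temp_band_degC):
    """Specification-based (Type-B) limit value: the temperature
    coefficient spec multiplied by the controlled temperature band."""
    return tempco_ppm_per_degC * temp_band_degC

# Example: a resistor specified at +/-5 ppm/degC in a laboratory
# controlled to +/-2 degC gives a +/-10 ppm limit value.
limit = temp_contribution_ppm(5, 2)

# Treating the limit as a rectangular distribution, the standard
# uncertainty contribution is the limit divided by sqrt(3).
u_temp = limit / math.sqrt(3)
```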

Type-A Uncertainties

In a commercial calibration laboratory it is often not economical to perform several sets of measurements on a particular instrument solely to produce a value for the random (Type-A) uncertainty contribution. The alternative method shown in NIS3003 is preferred and usually employed where possible. In cases where multiple measurements are performed, it is usual practice to calculate the standard deviation of the population. The estimated standard deviation of the uncorrected mean of the measurand is then calculated using:

Esd = Psd / sqrt(N)

Where:
Esd is the estimated standard deviation of the uncorrected mean of the measurand
Psd is the standard deviation of the population of values
N is the quantity of repeated measurements

When the quantity of measurements performed on the equipment being calibrated is limited to one set, N in the equation above will be 1. The standard deviation of the population, Psd, will previously have been determined from an earlier Type-A evaluation based upon a large number of repeated measurements. In an ideal world the measurements would be repeated on several instruments within the family and the worst-case standard deviation used in the Type-A assessment. In practice, however, providing the assessment techniques outlined in this paper are employed, the Type-A contribution to the uncertainty budget can often be shown to be negligible, so the need to make a very large number of repeated measurements is reduced.
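
A short Python sketch (illustrative, not from the paper) of the Esd calculation; the readings are invented example data:

```python
import math
import statistics

def estimated_sd_of_mean(population_sd, n_repeats):
    """Esd = Psd / sqrt(N): the estimated standard deviation of the
    uncorrected mean of N repeated measurements."""
    return population_sd / math.sqrt(n_repeats)

# Psd determined earlier from a larger Type-A evaluation
# (invented example readings, in volts):
readings = [1.302, 1.298, 1.301, 1.299, 1.300, 1.302, 1.297, 1.301]
psd = statistics.stdev(readings)   # sample standard deviation

# Later, routine calibration makes a single set of measurements
# (N = 1), so the Type-A contribution is simply Psd itself:
esd_single = estimated_sd_of_mean(psd, 1)
```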

In the ideal world where customers are willing to pay unlimited amounts of money for their calibrations, or where we have very large quantities of similar instruments to calibrate it is a fairly simple matter to measure several instruments many times and obtain a good reliable estimate for the standard deviation. In reality, customers have limited budgets and calibration laboratories rarely have even small quantities of particular instruments which can be used for extensive testing to provide a good and reliable estimate of the standard deviation. Another simpler, and more cost effective method is required.

Before embarking upon the assessment of uncertainties we need to understand exactly what our customer is expecting of their calibration report and what use they will make of it. For the majority of simple reference standards such as resistors, standard cells, capacitors etc. it is likely that the measured values will be used by the customer so an uncertainty assessment as defined by NIS3003 will be required. For the great majority of instruments it is often not possible to make any use of the values obtained during its calibration so it is usually only necessary to provide a calibration which demonstrates that the instrument is operating within its specifications. In these cases it is usually not necessary to provide measurements with the lowest measurement uncertainties, which allows some compromises to be made.

ISO 10012-1 (3) suggests that we should aim for an accuracy ratio between the uncertainty and the instrument being calibrated of greater than 3:1. ANSI/NCSL Z540-1 (4), the American interpretation of ISO Guide 25 (5), suggests that uncertainties become significant when the accuracy ratio is less than 4:1. If we assume that the instrument specification has the same coverage factor as the uncertainty, the following expression describes the resultant combination of the uncertainty and specification which should be used when the instrument is used to make measurements:

§ = sqrt(S² + U²)

Where:
§ is the expanded specification resulting from the calibration
S is the specification of the parameter being measured
U is the uncertainty of measurement when performing the calibration

In cases where S ≥ 4U the effect of the uncertainty upon the specification is negligible. For instance, assume that S = 8 and U = 2; then:

§ = sqrt(8² + 2²)
  = sqrt(64 + 4)
  = sqrt(68)
  = 8.25

Therefore, with an accuracy ratio of 4:1 the effective specification expands by 3.1%. As most uncertainties are quoted to only two figures, it is unlikely that this small increase would have any effect. Repeating the exercise with an accuracy ratio of 3:1 produces an increase of only 5.4%.
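As a quick sanity check, the expansion figures can be reproduced with a few lines of Python (a sketch; the function names are ours, not from any standard):

```python
import math

def effective_spec(spec, uncertainty):
    """Expanded specification after calibration: sqrt(S^2 + U^2)."""
    return math.sqrt(spec**2 + uncertainty**2)

def expansion_percent(spec, uncertainty):
    """Percentage growth of the specification caused by the uncertainty."""
    return (effective_spec(spec, uncertainty) / spec - 1.0) * 100.0

# 4:1 accuracy ratio (S = 8, U = 2): the spec grows by about 3.1 %
print(round(expansion_percent(8, 2), 1))   # 3.1
# 3:1 accuracy ratio (S = 6, U = 2): the spec grows by about 5.4 %
print(round(expansion_percent(6, 2), 1))   # 5.4
```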

The same analogy can be used when assessing the significance of a particular uncertainty contribution. Type-A uncertainties are those assessed using statistical methods usually based on many sets of measurements, thereby making them the most expensive to assess. Using the model above we can show that Type-A uncertainties are insignificant when they are less than 30% of the magnitude of the Type-B uncertainties:

Total Uncertainty ≈ Type-B uncertainties, where Type-A < 0.3 × Type-B

and:

Effective Specification ≈ Specification, where Total Uncertainty < 0.3 × Specification

From above we can show that Type-A uncertainties can be regarded as insignificant when they are less than 0.09 of the specification being tested, or in approximate terms Type-A uncertainties can be regarded as negligible when they are less than 10% of the specification.
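The arithmetic behind these thresholds can be illustrated with a short sketch (values are illustrative only): a Type-A contribution at 30% of the Type-B estimate inflates the combined budget by under 5%, and chaining the two 30% rules gives the 0.09 figure quoted above.

```python
import math

type_b = 1.0                      # Type-B estimate (arbitrary units)
type_a = 0.3 * type_b             # Type-A right at the 30 % threshold
total = math.sqrt(type_b**2 + type_a**2)

# Budget inflation caused by a Type-A at the 30 % threshold: about 4.4 %
print(round((total / type_b - 1) * 100, 1))

# Chaining the two rules: Type-A < 0.3 x Type-B and Total < 0.3 x Spec
# implies Type-A < 0.09 x Specification
print(round(0.3 * 0.3, 2))
```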

Verifying that an uncertainty contribution is less than a given value is usually much easier than assessing its precise magnitude. One method described in an earlier paper (6) normally requires only two complete sets of measurements to be made on the same instrument. One set of measurements is then subtracted, one measurement at a time, from the other set. The largest difference is taken as a conservative estimate of the Type-A uncertainty contribution. This technique has been verified many times against uncertainties assessed in the traditional way and has always produced an acceptably conservative estimate of the Type-A contribution, providing that an adequate quantity of measurements is compared across the range. Assuming that the comparison produces no values outside the limits defined earlier (10% of the DUT specification or 30% of the Type-B uncertainty estimate), it can be assumed that the Type-A uncertainties are not significant. To provide good confidence and consistency in the assessment process, the value defined as insignificant should always be included in the assessment.
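The two-run screening method can be sketched as follows; the readings, specification and Type-B figure below are hypothetical, chosen only to show the comparison logic:

```python
# Two complete measurement runs on the same instrument (hypothetical mW readings)
run1 = [0.998, 1.002, 1.001, 0.999, 1.003]
run2 = [1.000, 1.001, 0.998, 1.000, 1.002]

# Largest point-by-point difference, taken as a conservative Type-A estimate
type_a = max(abs(a - b) for a, b in zip(run1, run2))

spec = 0.050        # DUT specification, mW (hypothetical)
type_b = 0.020      # Type-B uncertainty estimate, mW (hypothetical)

# Screen from the text: insignificant if below 10 % of the specification
# and below 30 % of the Type-B estimate
insignificant = type_a < 0.10 * spec and type_a < 0.30 * type_b
print(round(type_a, 3), insignificant)
```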

It is also possible to use values for the Type-A assessment gained from other, related instruments, providing some knowledge of the construction of the instrument under test is available. For instance, it may be that a laboratory has already assessed a certain 50MHz to 18GHz signal generator and verified that the Type-A uncertainty contribution meets the criteria outlined above. A 12GHz signal generator from the same family is then submitted for assessment. In this case, providing the two signal generators share similar designs, hardware and layouts, and the same test methods and equipment are used, it would be reasonable to apply the 18GHz Type-A assessment to both generators. In other cases it might be possible to refer to published data for certain Type-A contributions.

In cases where these techniques reveal that the Type-A contributions are significant (as defined above) the uncertainty assessment should be performed in the usual way using many repeated measurements.

Sensitivity Coefficient

In most cases sensitivity coefficients can be assumed to be 1; however, there are some notable exceptions where other values will be used. One of these relates to the measurement of resolution bandwidth on a spectrum analyzer. In this case we have measurement uncertainties expressed in two different units: measurements of amplitude are expressed as an amplitude ratio (usually in dB) and measurements of frequency in Hz. The bandwidth measurement is often performed by applying a "pure" signal to the analyzer's input and setting the controls so that the signal shown below is visible. The envelope describes the shape of the filter, and normally we would measure the 3dB or 30% (below the reference) point of it (shown on the left of the figure below). To assess the sensitivity coefficient we need to determine the gradient of the graph at the measurement point. Spectrum analyzers often have an amplitude specification of 0.1dB per 1dB, so the amplitude uncertainty at 3dB will be ±0.3dB or ±7%. We then move ±7% from the 70% point and read off the resultant change in frequency.

The resultant change in frequency due to amplitude uncertainty is: ±3.8 frequency units. Since this value has been found for an amplitude specification of ±0.3dB it will have a sensitivity coefficient of 1.

Fig.1 -- Bandwidth measurement (determining the sensitivity coefficient)

On the right of the figure is a similar construction for assessing the frequency uncertainty due to the amplitude uncertainty when the 6dB (50%) point is measured. In this case the amplitude uncertainty increases to ±0.6dB (0.1x6). As a linear ratio this equates to ±13%.

Reading from the graph this represents a frequency uncertainty of ±6 frequency units.
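One plausible reading of these dB-to-percent figures treats the dB limits as power-ratio changes; a small sketch (our own, not part of the original method) reproduces the rounded percentages quoted above:

```python
def db_to_percent(db):
    """Power-ratio change, in percent, for a level shift of `db` decibels."""
    return (10 ** (db / 10) - 1) * 100

# +/-0.3 dB at the 3 dB point: roughly +/-7 %
print(round(db_to_percent(0.3), 1), round(db_to_percent(-0.3), 1))
# +/-0.6 dB at the 6 dB point: roughly 13 % in the downward direction
print(round(db_to_percent(0.6), 1), round(db_to_percent(-0.6), 1))
```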

Assessing the uncertainty contributions in this way greatly reduces the possibility of errors that might occur when following the theory using partial differentiation. In addition, a practical technique such as this is preferred by most calibration engineers.

Other empirical means of obtaining values for the uncertainty budget may also be employed. For instance, it might be possible to establish a value for the temperature coefficient by changing the environmental temperature by a few degrees. In this case we could derive a sensitivity coefficient for the output signal in terms of temperature change.
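As a sketch of such an experiment (all numbers hypothetical), the sensitivity coefficient falls straight out of the two-temperature measurement:

```python
# Hypothetical two-temperature experiment on a signal source
output_at_23c = 10.000   # output in volts at the reference temperature
output_at_28c = 10.004   # output after raising the chamber by 5 degC

# Empirical sensitivity coefficient: output change per degC
tempco = (output_at_28c - output_at_23c) / (28 - 23)
print(round(tempco * 1000, 2))   # temperature coefficient in mV per degC: 0.8
```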

Total Uncertainty Budget

One of the principal benefits of the latest revision of NIS3003 is the strong suggestion that all of the uncertainty contributions should be listed in a table along with their probability distributions. Whilst at first sight this seems a tedious task, it pays dividends in the future because it makes the contributors to the budget absolutely clear. The table below shows a typical example of an uncertainty assessment for a microwave power measurement at 18GHz using a thermocouple power sensor. These types of power sensor measure power levels relative to a known power, so a 1mW, 50MHz power reference is included on the power meter for this purpose. In most cases it is simpler and more correct to use a measuring instrument's specification rather than try to apply corrections and assess the resultant uncertainty. For the majority of measurements it is not possible to make corrections based upon a calibration report, as that report only indicates the instrument's calibration status at the time it was measured, and only when operated in the particular mode described on the certificate. It is not possible to predict the errors at any other points.

| Symbol | Source of Uncertainty                | Value ±% | Probability Distribution | Divisor | Ci | Ui ±% |
|--------|--------------------------------------|----------|--------------------------|---------|----|-------|
| K      | Calibration factor at 18 GHz         | 2.5      | normal                   | 2       | 1  | 1.25  |
| D      | Drift since last calibration         | 0.5      | rectangular              | sqrt(3) | 1  | 0.29  |
| I      | Instrumentation uncertainty          | 0.5      | normal                   | 2       | 1  | 0.25  |
| R      | 50 MHz reference spec.               | 1.2      | rectangular              | sqrt(3) | 1  | 0.69  |
| M1     | Mismatch: sensor to 50 MHz reference | 0.2      | U-shaped                 | sqrt(2) | 1  | 0.14  |
| M2     | Mismatch: sensor to 18 GHz generator | 5.9      | U-shaped                 | sqrt(2) | 1  | 4.17  |
| A      | Type-A uncertainties                 | 2.1      | normal                   | 2       | 1  | 1.05  |
| UC     | Combined standard uncertainty        |          | normal                   |         |    | 4.55  |
| U      | Expanded uncertainty                 |          | normal (k=2)             |         |    | 9.10  |

Where:

Ci is the sensitivity coefficient used to multiply the input quantities to express them in terms of the output quantity.

Ui is the standard uncertainty resulting from the input quantity.

The standard uncertainties are combined using the usual root-sum-squares method and then multiplied by the appropriate coverage factor (in this case k=2). In some cases it will be appropriate to use a different coverage factor, perhaps when a 95% confidence level is not adequate, or when the input quantities are shown to be "unreliable". The Vi (degrees of freedom of the standard uncertainty) or Veff (effective degrees of freedom) column has not been included in the table above in order to simplify the assessment process.
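The bottom two rows of the table can be reproduced from the individual contributions; the sketch below (our own notation) computes each Ui as value/divisor, combines them by root-sum-squares and applies k=2:

```python
import math

# (value in +/-%, divisor) for each contribution in the table above
budget = {
    "K":  (2.5, 2),               # calibration factor, normal (k=2)
    "D":  (0.5, math.sqrt(3)),    # drift, rectangular
    "I":  (0.5, 2),               # instrumentation, normal (k=2)
    "R":  (1.2, math.sqrt(3)),    # 50 MHz reference spec, rectangular
    "M1": (0.2, math.sqrt(2)),    # mismatch to reference, U-shaped
    "M2": (5.9, math.sqrt(2)),    # mismatch to generator, U-shaped
    "A":  (2.1, 2),               # Type-A, normal (k=2)
}

# Standard uncertainties Ui = value / divisor (all sensitivity coefficients = 1)
ui = [value / divisor for value, divisor in budget.values()]

combined = math.sqrt(sum(u**2 for u in ui))   # root-sum-squares
expanded = 2 * combined                       # coverage factor k = 2
print(f"{combined:.2f} {expanded:.2f}")       # 4.55 9.10
```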

Degrees of Freedom

Degrees of freedom is a term used to indicate confidence in the quality of the estimate of a particular input quantity to the uncertainty budget. For the majority of calibrations performed under controlled conditions there will be no need to consider degrees of freedom, and a coverage factor of k=2 will be used. In cases where the Type-A uncertainty has been assessed using very few measurements, a different coverage factor, derived from the degrees of freedom, would normally be calculated. However, whilst the assessment method proposed in this paper is based on only two sets of measurements, experimental data confirms that this treatment (taking the worst-case difference) produces a reliable, conservative estimate of the Type-A uncertainties. In most cases the degrees of freedom can be assumed to be infinite and evaluation of the t factor using the Welch-Satterthwaite equation is not necessary. NIS3003 provides some guidance on using these methods but stresses that normally it is not necessary to employ them.

Conclusion

The uncertainty assessment method described in this paper has been employed at Hewlett-Packard's UK Service Center for several years. External, internal and informal measurement audits have in every case confirmed that the uncertainties are being estimated with the expected level of confidence. This simplified approach is easier to understand and use, which enables more calibration engineers to contribute fully to the uncertainty assessments.

References
  1. The Expression of Uncertainty and Confidence in Measurement for Calibrations, NIS3003, Edition 8, May 1995.
  2. Guide to the Expression of Uncertainty in Measurement, BIPM, IEC, IFCC, ISO, IUPAC, OIML. International Organization for Standardization, ISBN 92-67-10188-9. BSI equivalent: "Vocabulary for Metrology, Part 3. Guide to the Expression of Uncertainty in Measurement", BSI PD 6461: 1995.
  3. Quality Assurance Requirements for Measuring Equipment, Part 1: Metrological Confirmation System for Measuring Equipment, ISO 10012-1:1992.
  4. Calibration Laboratories and Measuring and Test Equipment - General Requirements, ANSI/NCSL Z540-1-1994.
  5. General Requirements for the Competence of Calibration and Testing Laboratories, ISO Guide 25:1990, International Organization for Standardization.
  6. "Calculating the Uncertainty of a Single Measurement", IEE Colloquium on Uncertainties in Electrical Measurements, 11 May 1993, Ian Instone, Hewlett-Packard.

Measuring Language

Metrologists spend so much time numerically quantifying physical phenomena that the opportunity to consider the language used to quantify may be a welcome diversion. We start by assigning values to some comparative terms.

But how many is some? Perhaps six or seven? Well, it's probably more than several, so let us assume that several is four or five. And how many is a few? Most consider it to be less than several and therefore certainly less than some. But it's more than two, since two is definitely a couple. By these terms a few must be three or four.

Dictionary

Reference to a handy Oxford English Dictionary reveals that some is "an appreciable or considerable number". Surprising since, conversely, sometimes isn't generally felt to be very often. Indeed, the OED defines the frequency of sometimes as "at one time or other". Seemingly, some has a serious lack of stability, having the duality of being both a large and small quantity at once. Given this, you'd need to be quite an optimist to ask for some apple pie.

Which leads us to wonder about that quite qualifier. Quite, when relating to a lot (many) diminishes the lot; quite a lot clearly being less than a lot. Similarly, quite big is smaller than simply big and also, quite good being rather poorer than good.

However, quite when used to qualify virtue, increases the degree of trueness; quite correct being more right than just correct. Likewise, probably is more probable when it is quite probably. And on the subject of confidence, just right attributes a higher degree of perfection than something that is only right. By reversing the phrase and with only an additional pause, as in "right... just", it's possible to convey a sense of barely satisfying the requirement.

A more interesting observation concerns opposites which we came upon quite by chance and which is, evidently, more extraordinary than doing so by chance. Consider valid. Quite valid is marginally less valid than valid but quite invalid is far more invalid than invalid. At the same time, quite true is truer than true; quite untrue more untrue than untrue.

By combining the foregoing propositions we can address the question of how many is quite a few? It seems to be more than a few and, alarmingly, this may then encroach on the ground occupied by several. Since quite several is nonsensical whereas quite some is more than some (albeit colloquially for emphasis, as in "That is quite some building"), it stands to reason that several misses out a bit (a bit being less than quite a lot but more than nothing).

The entire discussion serves to illustrate the imprecision of language; it has uncertainty. But to what degree? Well, certain suggests definite (=100%) but uncertain doesn't mean impossible (>0%), so maybe tends towards 50%. If certain equates to 100% and uncertain lies in the range 30-70%, might risky reflect 5-30%? But what is something having higher confidence than uncertain but not the absolute assurance of certain? Hmmm... language guardbands are required. Some metrologists are quite certain of that, surely?

And you thought the language of measurement was difficult!

A History Lesson

IN THE BEGINNING was created the Imperial Ton
= 2240 pounds (lbs.)
= 20 Hundredweight (cwt) i.e. 1 cwt = 112 lbs.
AND YEA, when the Pilgrim Fathers landed on Plymouth Rock, they said
VERILY: One Hundredweight should be one hundred pounds and one Ton should be 2000 lbs.
THUS WAS CREATED the US Ton.
BUT SORELY DISPLEASED were the merchants and traders when they became aware that the colonials were making 10% on the side.
THUS IT CAME TO PASS that the British traders did declare that their galleons would, in future, also use measures of 2000 lbs, and declared that this measure should be named the Short Ton.
MANY MOONS PASSED, and the tribes of Europe did send their high priests to council one with the other, whereupon they begat the EEC (EU).
THE TRIBES OF THE CONTINENT did pour scorn upon the Ton and the Short Ton, and being more in number than the Britons did ordain that all nations should obey The New Commandment: Thou shalt worship the Tonne which equates to 1000 kilograms (kg).
THIS DID SORELY DISPLEASE THE BRITONS, since this new measure did contain 2205 lbs., but it came to pass that more tribes came to join the EEC and the Britons were obliged to pay homage to the Tonne.
THE EEC DID COMMAND that tablets of stone be carved, on which was writ:
1 IMPERIAL TON = 2240 lbs.
1 SHORT TON=1 US TON = 2000 lbs.
1 TONNE = 1000 kg = 2205 lbs.
THUS WAS THE CONFUSION CREATED.

Amen

Saturday, June 21, 2008

Factors Influencing Automation Projects

Here's a good PowerPoint presentation about factors influencing automation projects. It covers DCS, PLC, SCADA, industrial local area networking, standards and more. Great for basic discussions!

Download Now!

Friday, June 20, 2008

Calibration Basics!

The following is a presentation from National Instruments' Test Equipment Summit that serves as a good primer on calibration. It explains all the basic concepts and terms with respect to incorporating calibration into best practices and ensuring product quality.

What is Calibration?

Definition: Calibration is the comparing of a measurement device (an unknown) against an equal or better standard. A standard in a measurement is considered the reference; it is the one in the comparison taken to be the more correct of the two. One calibrates to find out how far the unknown is from the standard.

Typical Calibration: A "typical" commercial calibration references a manufacturer's calibration procedure and is performed with a reference standard at least four times more accurate than the instrument under test.

Why Calibrate?
Calibration is an Insurance Policy.

Some people consider calibration a necessary annoyance to keep the auditor off their back. In fact, out of tolerance (OOT) instruments may give false information leading to unreliable product, customer dissatisfaction and increased warranty costs. In addition, OOT conditions may cause good products to fail tests, which ultimately results in unnecessary rework costs and production delays.

Calibration Terms
Common calibration terms

Out of Tolerance Conditions: If the results are outside of the instrument's performance specifications it is considered an OOT (Out of Tolerance) condition and will result in the need to adjust the instrument back into specification.

Optimization: Adjusting a measuring instrument to make it more accurate is NOT part of a "typical" calibration, a common misconception; it is frequently referred to as "optimizing" or "nominalizing" an instrument. Only reputable and experienced calibration providers should be trusted to make adjustments on critical test equipment.

As Found Data: The reading of the instrument before it is adjusted.

As Left Data: The reading of the instrument after adjustment or “Same As Found” if no adjustment was made.

Without Data: Most calibration labs charge more to provide the certificate with data and will offer a “No-Data” option. In any case “As-Found” data must be provided for any OOT condition.

Limited Calibration: Sometimes certain functions of an instrument may not be needed by the user. It may be more cost effective to have a limited calibration performed (This can even include a reduced accuracy calibration).

TUR - Test Uncertainty Ratio: The ratio of the accuracy of the instrument under test compared to the accuracy of the reference standard.
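As a minimal illustration of the ratio (the figures below are hypothetical):

```python
def tur(dut_spec, standard_uncertainty):
    """Test Uncertainty Ratio: DUT tolerance over reference-standard accuracy."""
    return dut_spec / standard_uncertainty

# Hypothetical: a +/-1.0 % instrument tested against a +/-0.2 % standard
ratio = tur(1.0, 0.2)
print(ratio, ratio >= 4)   # 5.0 True
```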

ISO 9000 Calibration
ISO 9000 calibrations are crucial for many industries. The following is required for ISO 9000 Compliant Calibrations.

An Accredited Calibration Lab Performing the Work: The calibration laboratory employed to perform the calibration must be an ISO 9001:2000 accredited lab or the original equipment manufacturer.

Documented Calibration Procedures: It is critical that a valid calibration procedure be used, based on the manufacturer's recommendations and covering all aspects of the instrument under test.

Trained Technicians: Proper Training must be documented for each discipline involved in performing the calibration.

Traceable Assets: The calibration provider must be able to demonstrate an unbroken chain of traceability back to NIST.

Proper Documentation: All critical aspects of the calibration must be properly documented for the certificate to be recognized by an ISO auditor.

A Comprehensive Equipment List: For any manufacturer to pass an ISO audit regarding calibration, they must demonstrate a comprehensive equipment list with controls in place for additions, subtractions and custodianship of equipment.

Calibrated and NCR Items Properly Identified: The equipment list must identify any units that do not require calibration and controls must be in place to ensure that these units are not used in an application that will require calibration.

A Proper Recall System: A procedure should be established with timeframes for recall notification, an escalation procedure, and provisions for due-date extension.

Equipment Custodianship: Responsibility for ensuring the equipment is returned to the cal lab should be assigned and delegated.

An OOT Investigation Log: Any instrument found out of tolerance requires that an investigation be performed to determine the impact on manufacturing. Records and reports need to be maintained.

ISO/IEC 17025 Calibration
ISO/IEC 17025 Calibration: As a general rule, 17025 calibrations are required of anyone supplying the automotive industry, and the standard has also been voluntarily adopted by numerous companies in FDA-regulated industries.

ISO/IEC 17025 is an international standard that assesses the technical competency of calibration laboratories. ISO/IEC 17025 covers every aspect of laboratory management, ranging from testing proficiency to record keeping and reports. It goes several steps beyond an ISO 9001:2000 certification.

A “17025” calibration is a premium option that provides additional information about the quality of each measurement made during the calibration process by individually stating the uncertainty calculation of each test point.

Calibration Intervals
How Calibration Intervals are Determined

Calibration intervals are to be determined by the instrument "owner" based on manufacturer recommendations. Commercial calibration laboratories can suggest intervals, but in most cases they are not familiar with the details of the instrument's application.

The OEM intervals are typically based on parameters like mean drift rates for the various components within the instrument. However, when determining calibration intervals as an instrument "owner", several other factors should be taken into consideration, such as the required accuracy vs. the instrument's accuracy, the impact an OOT condition will have on the process, and the performance history of the particular instrument in your application.

How to Implement or Improve a Calibration Program
Any successful calibration program must begin with an accurate recall list of your test, measurement and diagnostic equipment.

The recall list should contain a unique identifier which can be used to track the instrument, the location, and the instrument’s custodian (Often asset management software, bar-coding systems, and physical inventories are used to help establish accurate recall lists).
It is important when assembling a recall list that modules, plug-ins, and small handheld tools are not overlooked. Also, you may have several “home-made” measuring devices (e.g. Test Fixtures) which will also need to be captured on your equipment list for a reliable calibration program.
The next step is to identify all of the instruments on your recall list which may not require calibration due to redundancies in your testing process (A commercial calibration laboratory should be able to aid you in identifying these instruments).
After creating an accurate recall list procedures must be established for adding new instruments, removing old or disposed instruments, or making changes in instrument custodianship. Recall reports should be run with sufficient time for both the end user and the service provider to have the unit calibrated with a minimal impact on production.
A late report identifying any units about to expire or already expired will ensure 100% conformity. A full service calibration laboratory will supply these recall reports and will provide special escalation reporting when equipment is not returned for service.
(Some calibration labs offer the choice of web-based equipment management systems that allow their customer to perform recall reports, late reports and keep electronic versions of their calibration certificates.)

Avoiding Production Delays
Obtain timely equipment calibrations without shutting down a line for days.

Look for a calibration service provider that can perform onsite (or in-place) calibrations at your facility. Often when your volume is more than 20 calibrations, scheduling onsite calibration saves time and lowers cost.
Make sure you find a “one-source” calibration provider that has sufficient capabilities to calibrate nearly all of your equipment during the onsite, reducing the delays and the expense of using an additional subcontractor.
Other options for reducing downtime include mobile Calibration lab services, scheduled depot calibrations, calibrations during shutdowns, scheduled pick-up and delivery, and weekend or nightshift calibrations.

Should We Calibrate Ourselves?
Most companies discover they cannot effectively perform their own calibrations for many reasons. The most frequent issues with internal calibrations are:

Cost of standards: Often, the cost of the assets with the required accuracy to perform the calibration is prohibitive (It could take years of calibrations to pay for one standard).

Developing Procedures: Many manufacturers' procedures are not readily available and sometimes require research and development, which can cost hundreds of hours of labor.

Productivity of Technicians: Often a non-commercial calibration laboratory's productivity per employee is only a fraction of what can be obtained through an external commercial calibration laboratory that specializes in automation, efficient procedures and experienced management.

Cost of Management: Managing the employees, assets, maintenance and processes of a calibration lab can be burdensome on existing management staff.

Not a core competency: The overall management burden of the operation distracts from the core competency of the company.

Wednesday, June 18, 2008

HOW TO RECRUIT THE RIGHT PERSON FOR THE JOB?

Put about 100 bricks in some particular order in a closed room with an open window.

Then send 2 or 3 candidates into the room and close the door.

Leave them alone and come back after 6 hours, then analyze the situation.

If they are counting the bricks, put them in the accounts department.

If they are recounting them, put them in auditing.

If they have messed up the whole place with the bricks, put them in engineering.

If they are arranging the bricks in some strange order, put them in planning.

If they are throwing the bricks at each other, put them in operations.

If they are sleeping, put them in security.

If they have broken the bricks into pieces, put them in information technology.

If they are sitting idle, put them in human resources.

If they say they have tried different combinations, yet not a brick has been moved, put them in sales.

If they have already left for the day, put them in marketing.

If they are staring out of the window, put them on strategic planning.

And then, last but not least: if they are talking to each other and not a single brick has been moved, congratulate them and put them in top management.

Sunday, June 15, 2008

The Supervisor

Anyone in industry who started in a low-ranking position has experienced working under supervisors. These people are key personnel in their subordinates' career paths: they unconsciously mold the thinking and actions of the people under them, and they are the link between upper management and the rank-and-file workers of an organization. A supervisor can upgrade and inspire a rank-and-file worker's attitude to work and outlook on life, degrade and demoralize that person, or simply remain a stagnant nothing in the working environment.

There are different kinds of supervisors that I have encountered over the past several years as a member of the working industry. The following are the different types and their relationships with their subordinates.

The Technical Supervisor - This type leads with skill in the field he manages. These supervisors possess more skill and knowledge than their men have acquired. He can do, almost all of the time, what his subordinates can, and can effortlessly explain the technical theories behind the work. He leads by setting himself as an example, working as part of the team while relying on them for manpower. This type of supervisor can inspire his team. He is very well known among the other technical guys and operators, but less so to higher management, for he is almost always on the job and rarely produces detailed reports with his name on them. This is one indispensable guy in terms of the industry's operations.

The Signatory Supervisor - Yes, that's what they do: they just sign documents. These people are the major demoralizers of the departments they are in. Supervisors of this type have lots of 'friends' in upper management because they do everything on paper, and paper can last for years. They are almost always untechnical and rely mostly on the output of their subordinates. They cannot stand being alone in the work area for fear that a job order might come in with emergency status while they are the only ones there to attend to it, with no know-how for the job. These supervisors usually get the merit for their men's output. Are you one of them?

The Seminarian - Yeah, you heard it right. This one attends seminars for most of his work schedule. He is rarely in his office and is always attending conferences and seminars, either technical or work-enhancement programs. He is 'securing' his future, if you know what I mean. The workers, on the other hand, end up practicing "can work without supervision". It's up to the supervisor to apply what he learned during those times.

The Call Center Agent - This person answers calls, almost all of the time! He is always by the phone and is the first to grab it when it rings. He stays at his table and just chatters the time away. I wonder who is on the phone?

The Be-One-of-Us - This guy has a strategy, and his strategy could be his rise or fall. He is willing to be with his guys through thick and not-so-thin. He tolerates mistakes and allows them within his premises for fear that his men would go on a sit-down strike. What I mean is that this type of supervisor allows his men to do anything, as long as they do their work very well when ordered to and are not caught by higher management. Once one of his men does get caught, though, he comes out clean and knows nothing about it when questioned. Do you know someone like this? Let me know!

I am In-Command, do you HEAR? - This guy shows power! He can shout and point a finger at you for a job not well done. His men live in fear that he might report something to higher management that could bring disciplinary action against someone. He cannot expect to draw out the full potential of his team, though. This usually happens when the age gap between the supervisor and his rank-and-file personnel is a bit wide.

These are the types of supervisors I have encountered, directly or indirectly. Each has its own advantages and disadvantages, and one person can be a mix of two, three or four of these qualities. Do you have other kinds as well? Or are you one of them? Smile...