Friday, August 22, 2008

Standards Certification... The Need... The Lies...

Almost everyone tells a lie or two in their lifetime. Sometimes you aren't even conscious of it, and before you know it, you are already lying to someone. But I have recently seen the ultimate liars (aside from our government) right before my very eyes.

I am part of that agenda, actually, though not directly. If you are wondering what the heck I am talking about here, continue reading this article. Have you heard of TPM, or even ISO? The so-called Standards Certification thing?

When the company management starts to say something like:

"Hey, we will be applying for this international standards thing. We need your help. When someone from external auditing team for this ISO or TPM or JIPM team would ask you something blah blah blah, you should say blah blah blah. Don't ever say that we are blah this and blah that. Help the company, it will be for everyone here."

You are part of the lying. When the company does not really follow its own Corporate Policy and it is just a mere poster on your office wall, then all of you are lying. It is cheating-and-lying fever.

The major non-conformances get fixed temporarily while the auditors do their thing the next day, week, or month. Be prepared to lie; I mean, be prepared to answer what you have rehearsed for that event, even if just for that day. You are greatly needed.

You need to lie to be one of the Heroes. hehehehe.

Wednesday, July 30, 2008

Risk taking in the field... Is it worth it?

I have known some risk-takers in the field of industrial technology. It usually happens during troubleshooting or system modification, and I admit that in some instances I have been one of those risk-takers. I usually do that when I have no experience in a particular area of the industry but have a backing theory learned somewhere, from reading technical material and pulling together all the industrial concepts related to the situation. I am glad to tell anyone reading this article that I have been quite successful most of the time. It is a sort of first-and-final move in a real-life situation, where less time spent and less analysis applied equals money saved by the company and a boost in how others view what you are capable of.
There are sure-ballers I know of who wouldn't contribute even a little action toward a certain industrial activity because they have no specific experience or training on that equipment or system. They need to be sure of what they are doing and are afraid to make and accept mistakes if that should happen. In short, they lack confidence and refuse to let their minds do the work, and sadly I know some people like that personally. They cannot offer anything out of the box; they need stored knowledge of a thing before they can take part confidently.
In the field of industrial instrumentation, which I am currently venturing into, most if not all devices just follow the basic principles of sensing and signal manipulation; it is only the brands, models, data syntax, and commands that differ. If you know basic industrial instrumentation principles, then there is no need to be afraid of what is new. You got me?

Wednesday, July 16, 2008

4-20 mA Current Loop Primer

This application note’s primary goal is to provide an easy-to-understand primer for users who are not familiar with 4-20mA current loops and their applications. Some of the many topics discussed include: why, and where, 4-20mA current loops are used; the functions of the four components found in a typical application; and the electrical terminology and basic theory needed to understand current loop operation. Users looking for product-specific information and/or typical wiring diagrams for DATEL’s 4-20mA loop- and locally-powered process monitors are referred to DMS Application Note 21, titled “Transmitter Types and Loop Configurations.”

Despite the fact that the currents (4-20mA) and voltages (+12 to +24V) present in a typical current loop application are relatively low, please keep in mind that all local and national wiring codes, along with any applicable safety regulations, must be observed. Also, this application note is intended to be used as a supplement to all pertinent equipment-manufacturers’ published data sheets, including the sensor/transducer, the transmitter, the loop power supply, and the display instrumentation.
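To give a rough taste of the arithmetic a receiving device performs on such a loop, here is a minimal Python sketch. It is not from the DATEL note; the 0-100 range and the 3.8-20.5 mA "fault band" are illustrative assumptions.

# Minimal sketch: linear scaling of a 4-20 mA loop current into
# engineering units at the receiving end (illustrative range values).

def current_to_process_variable(i_ma, pv_low=0.0, pv_high=100.0):
    """Convert a measured loop current (mA) to a process variable.

    4 mA maps to pv_low, 20 mA maps to pv_high; readings outside the
    assumed 3.8-20.5 mA band are flagged, since many transmitters use
    levels outside the normal span to signal a fault.
    """
    if i_ma < 3.8 or i_ma > 20.5:
        raise ValueError(f"{i_ma} mA is outside the expected loop range")
    return pv_low + (i_ma - 4.0) / 16.0 * (pv_high - pv_low)

# Example: a 12 mA reading on a 0-100 psi transmitter is mid-scale.
print(current_to_process_variable(12.0))   # -> 50.0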

Read more...

Download it now from this page's Download Section below!

Saturday, July 12, 2008

Lessons in Electric Circuit Ebook Download

Just added a new free ebook to the download section, entitled Lessons in Electric Circuits Reference. From useful equations to conversion factors to various trigonometric and calculus references to circuit codes and symbols, it is all available in this awesome free ebook.

With 161 pages of pure electrical references and information, you could ask for nothing more in this field.

In portable document format (PDF) for compatibility and ease of printing to a hard copy.

Download now at this page's download section!

Thursday, July 10, 2008

What are Pressure Transmitters

Pressure transducers are devices that convert the mechanical force of applied pressure into electrical energy. This electrical energy becomes a signal output that is linear and proportional to the applied pressure. Pressure transducers are very similar to pressure sensors and transmitters. In fact, transducers and transmitters are nearly synonymous. The difference between them is the kind of electrical signal each sends. A transducer sends a signal in volts (V) or millivolts per volt (mV/V), and a transmitter sends signals in milliamps (mA).
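To make that signal difference concrete, here is a minimal Python sketch; the 2 mV/V sensitivity, excitation voltage, and 0-250 psi range are assumed figures, not tied to any particular product.

# Sketch: a ratiometric mV/V transducer versus a 4-20 mA transmitter.

def transducer_pressure(v_out_mv, v_excitation, full_scale_pressure,
                        sensitivity_mv_per_v=2.0):
    """Ratiometric mV/V transducer: output scales with the excitation voltage."""
    full_scale_mv = sensitivity_mv_per_v * v_excitation
    return (v_out_mv / full_scale_mv) * full_scale_pressure

def transmitter_pressure(i_ma, full_scale_pressure):
    """4-20 mA transmitter: output current is independent of supply voltage."""
    return (i_ma - 4.0) / 16.0 * full_scale_pressure

print(transducer_pressure(10.0, 10.0, 250.0))  # 10 mV at 10 V excitation -> 125.0
print(transmitter_pressure(12.0, 250.0))       # 12 mA -> 125.0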

Both transmitters and transducers convert energy from one form to another and give an output signal. This signal goes to any device that interprets and uses it to display, record or alter the pressure in the system. These receiving devices include computers, digital panel meters, chart recorders and programmable logic controllers. There are a wide variety of industries that use pressure transducers and transmitters for various applications. These include, but are not limited to, medical, air flow management, factory automation, HVAC and refrigeration, compressors and hydraulics, aerospace and automotive.

There are important things to consider when deciding what kind of pressure transducer to choose. The first consideration is the kind of connector needed to physically connect the transducer to a system. There are many kinds of connectors for different uses, including bulletnose and submersible connectors, which have unique applications. Another important part is the internal circuitry of the transducer unit, which is housed by a "can" that provides protection and isolates the electronics. The can may be made of stainless steel or a blend of composite materials and stainless steel. The various degrees of protection extend from nearly no protection (an open circuit board) to a can that is completely submersible in water. Other kinds of enclosures safeguard the unit in hazardous areas from explosions and other dangers.

The next thing to consider is the sensor, which is the actual component that does the work of converting the physical energy to electrical energy. The component that alters the signal from the sensor and makes it suitable for output is called the signal conditioning circuitry. The internal circuitry must be resistant to harmful external energy like radio frequency interference, electromagnetic interference and electrostatic discharge. These kinds of interference can cause incorrect readings and must be guarded against. Overall, pressure transducers are well-performing, high-accuracy devices that make life easier for many industries.

Wednesday, July 09, 2008

New Download Section...

Hi!

Just recently I added a download section to this site for free items related to Industrial Technology. I will be offering free ebooks and references as well as applications and other software. You can also request items by contacting me through the chat box, or through my email for more privacy. I am a research junkie and already have hundreds of software tools and ebooks related to this field at hand. Enjoy!

d4rkhowl

What is OPC Server Development?

An OPC Server is a software application that acts as an API (Application Programming Interface) or protocol converter. An OPC Server will connect to a device such as a PLC, DCS, or RTU, or to a data source such as a database or HMI, and translate the data into a standards-based OPC format. OPC-compliant applications such as an HMI, historian, spreadsheet, or trending application can connect to the OPC Server and use it to read and write device data. An OPC Server is analogous to the role a printer driver plays in enabling a computer to communicate with an ink jet printer. An OPC Server is based on a Server/Client architecture.
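From the client side, using such a server can look as simple as the sketch below. It assumes the third-party OpenOPC for Python package and Matrikon's simulation server ProgID and tag names, none of which come from the text above; substitute your own server and tags.

# Sketch of reading/writing device data through an OPC DA server as a client.
import OpenOPC

opc = OpenOPC.client()
opc.connect('Matrikon.OPC.Simulation.1')   # ProgID of the OPC DA server (assumed)

# read() returns (value, quality, timestamp) for a single tag
value, quality, timestamp = opc.read('Random.Int4')
print(value, quality, timestamp)

# Writing works the same way: the client never speaks the device protocol,
# the OPC server does that translation behind the scenes.
opc.write(('Bucket Brigade.Int4', 42))

opc.close()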

There are many OPC Server development toolkits available for developing your own OPC Server; MatrikonOPC's Rapid OPC Creation Kit (ROCKit) is one of them and enables quick OPC Server development. ROCKit offers a flexible and affordable solution that enables programmers to fully control their own product.

OPC ROCKit packages the complete OPC interface into a single DLL, eliminating the need to learn the complexities of Microsoft COM, DCOM or ATL. A developer simply writes the communication protocol routines for the underlying device and ROCKit takes care of the OPC issues.

Features include:

- Fully compliant with OPC DA 1.0a, 2.05 and 3.0 specifications.
- Free threading model on Windows NT, 2000 and XP platforms.
- Supports self-registration, browsing, data quality reporting, and timestamps.
- Can be used as a stand-alone server or as a service.
- In-proc server design for high-performance communication.
- Sample application code and comprehensive documentation illustrating how to use the ROCKit.
- OPC Explorer client that exercises the OPC COM interface for testing and debugging your server.
- The interface to the Device Specific Plug-in application code is separate from the OPC COM interface code. This means that future OPC source code updates are simply plugged in, while your own protocol code remains untouched, resulting in minimal engineering effort.

OPC (OLE for Process Control) Overview

OPC is a series of standards specifications. The first standard (originally called simply the OPC Specification and now called the Data Access Specification) resulted from the collaboration of a number of leading worldwide automation suppliers working in cooperation with Microsoft. Originally based on Microsoft's OLE COM (component object model) and DCOM (distributed component object model) technologies, the specification defined a standard set of objects, interfaces and methods for use in process control and manufacturing automation applications to facilitate interoperability. The COM/DCOM technologies provided the framework for software products to be developed. There are now hundreds of OPC Data Access servers and clients available.

Adding the OPC specification to Microsoft's OLE technology in Windows allowed standardization. Manufacturers of industrial devices could now write OPC DA Servers, and software (like Human Machine Interfaces, HMIs) could become OPC Clients.

The benefit to the software suppliers was the ability to reduce their expenditures for connectivity and focus them on the core features of the software. For the users, the benefit was flexibility. They don't have to create and pay for a custom interface. OPC interface products are built once and reused many times, therefore, they undergo continuous quality control and improvement.

The user's project cycle is shorter using standardized software components. And their cost is lower. These benefits are real and tangible. Because the OPC standards are based in turn upon computer industry standards, technical reliability is assured.

The original specification standardized the acquisition of process data. It was quickly realized that communicating other types of data could benefit from standardization. Standards for Alarms & Events, Historical Data, and Batch data were launched.

Current and emerging OPC Specifications include:

OPC Data Access
The originals! Used to move real-time data from PLCs, DCSs, and other control devices to HMIs and other display clients. The Data Access 3 specification is now a Release Candidate. It leverages earlier versions while improving the browsing capabilities and incorporating XML-DA Schema.
OPC Alarms & Events
Provides alarm and event notifications on demand (in contrast to the continuous data flow of Data Access). These include process alarms, operator actions, informational messages, and tracking/auditing messages.
OPC Batch
This specification carries the OPC philosophy to the specialized needs of batch processes. It provides interfaces for the exchange of equipment capabilities (corresponding to the S88.01 Physical Model) and current operating conditions.
OPC Data eXchange
This specification takes us from client/server to server-to-server with communication across Ethernet fieldbus networks. This provides multi-vendor interoperability! And adds remote configuration, diagnostic and monitoring/management services.
OPC Historical Data Access
Where OPC Data Access provides access to real-time, continually changing data, OPC Historical Data Access provides access to data already stored. From a simple serial data logging system to a complex SCADA system, historical archives can be retrieved in a uniform manner.
OPC Security
All the OPC servers provide information that is valuable to the enterprise and if improperly updated, could have significant consequences to plant processes. OPC Security specifies how to control client access to these servers in order to protect this sensitive information and to guard against unauthorized modification of process parameters.
OPC XML-DA
Provides flexible, consistent rules and formats for exposing plant floor data using XML, leveraging the work done by Microsoft and others on SOAP and Web Services.
OPC Complex Data
A companion specification to Data Access and XML-DA that allows servers to expose and describe more complicated data types such as binary structures and XML documents.
OPC Commands
A Working Group has been formed to develop a new set of interfaces that allow OPC clients and servers to identify, send and monitor control commands which execute on a device.

Tuesday, July 08, 2008

What are Temperature Transmitters

Temperature measurement using modern scientific thermometers and temperature scales goes back at least as far as the early 18th century, when Gabriel Fahrenheit adapted a thermometer (switching to mercury) and a scale, both developed by Ole Christensen Rømer. Fahrenheit's scale is still in use, alongside the Celsius scale and the Kelvin scale.

Many methods have been developed for measuring temperature. Most of these rely on measuring some physical property of a working material that varies with temperature. One of the most common devices for measuring temperature is the glass thermometer. This consists of a glass tube filled with mercury or some other liquid, which acts as the working fluid. Temperature increases cause the fluid to expand, so the temperature can be determined by measuring the volume of the fluid. Such thermometers are usually calibrated so that one can read the temperature simply by observing the level of the fluid. Another type of thermometer that is not used much in practice, but is important from a theoretical standpoint, is the gas thermometer.

RTD temperature transmitters convert the RTD resistance measurement to a current signal, eliminating the problems inherent in transmitting the RTD signal over long leads. Errors in RTD circuits (especially two- and three-wire RTDs) are often caused by the added resistance of the leadwire between the sensor and the instrument. Transmitter input, specifications, user interfaces, features, sensor connections, and environment are all important parameters to consider when selecting RTD temperature transmitters.

Transmitter input specifications to take into consideration include reference material, reference resistance, other inputs, and the sensed temperature range. Choices for reference material include platinum, nickel or nickel alloys, and copper. Platinum is the most common metal used for RTDs; for measurement integrity it is the element of choice. Nickel and nickel alloys are also commonly used; they are economical but not as accurate as platinum. Copper is occasionally used as an RTD element: its low resistivity forces the element to be longer than a platinum element, but it offers good linearity and is economical, with an upper temperature range typically less than 150 degrees Celsius. Gold and silver are other options for RTD probes, but their low resistivity and higher cost make them fairly rare; tungsten has high resistivity but is usually reserved for high-temperature work. When matching probes with instruments, the reference resistance of the RTD probe must be known. The most common options are 10, 100, 120, 200, 400, 500, and 1000 ohms. Other inputs include analog voltage, analog current, and resistance input. The temperature range to be sensed and transmitted is also important to consider.

Important transmitter specifications to consider include mounting and output. Mounting styles include thermohead or thermowell mounting, DIN rail mounting, and board or cabinet mounting. Common outputs include analog current, analog voltage, and relay or switch output. User interface choices include analog front panel, digital front panel, and computer interface. Computer communications choices include serial and parallel interfaces. Common features include intrinsically safe designs, digital or analog displays, and waterproof or sealed housings. Sensor connections include terminal blocks, lead wires, screw clamps or lugs, and plug or quick connects. An important environmental parameter to consider is the operating temperature.
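As a small illustration of the two jobs an RTD transmitter performs, here is a Python sketch that converts a Pt100 resistance to temperature and then scales it onto a 4-20 mA output. The Callendar-Van Dusen coefficients are the standard IEC 60751 values; the 0-150 °C transmitter range is an assumption.

# Sketch: Pt100 resistance -> temperature -> 4-20 mA (valid for 0 degC and above).
from math import sqrt

R0 = 100.0        # Pt100 reference resistance at 0 degC, ohms
A = 3.9083e-3     # Callendar-Van Dusen coefficients per IEC 60751
B = -5.775e-7

def pt100_temperature(resistance_ohms):
    """Solve R(t) = R0*(1 + A*t + B*t**2) for t (degC), t >= 0."""
    c = 1.0 - resistance_ohms / R0
    return (-A + sqrt(A * A - 4.0 * B * c)) / (2.0 * B)

def temperature_to_ma(t, t_low=0.0, t_high=150.0):
    """Scale temperature onto the 4-20 mA transmitter output (assumed range)."""
    return 4.0 + 16.0 * (t - t_low) / (t_high - t_low)

r = 138.51                     # ohms measured at the sensor
t = pt100_temperature(r)       # about 100 degC
print(round(t, 2), round(temperature_to_ma(t), 2))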

Friday, July 04, 2008

What is a Control System?

In the case of linear feedback systems, a control loop, including sensors, control algorithms and actuators, is arranged so as to regulate a variable at a setpoint or reference value. An example would be increasing the fuel supply to a furnace when a measured temperature drops. PID controllers are common and effective in cases such as this. Control systems that include some sensing of the results they are trying to achieve make use of feedback and so can, to some extent, adapt to varying circumstances. Open-loop control systems do not directly make use of feedback, but run only in pre-arranged ways.
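A bare-bones Python sketch of such a PID loop follows; the gains and setpoint are invented, and a real controller would add output limits, anti-windup, bumpless transfer, and so on.

# Minimal discrete PID controller sketch.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: a low furnace temperature produces a positive output,
# i.e. more fuel, as in the furnace example above.
controller = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=350.0)
print(controller.update(measurement=340.0, dt=1.0))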

Pure logic controls were historically implemented by electricians with networks of relays, and designed with a notation called ladder logic. Nowadays, most such systems are constructed with programmable logic controllers.

Logic controllers may respond to switches, light sensors, pressure switches etc and cause the machinery to perform some operation. Logic systems are used to sequence mechanical operations in many applications. Examples include elevators, washing machines and other systems with interrelated stop-go operations.

Logic systems are quite easy to design, and can handle very complex operations. Some aspects of logic system design make use of Boolean logic.
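As a small illustration of that Boolean flavor, here is a Python sketch (with made-up signal names) of the classic start/stop seal-in that relay wiring and ladder logic both express.

# Sketch: motor start/stop seal-in expressed as Boolean logic.

def motor_run(start_pb, stop_pb, overload_ok, already_running):
    """Motor runs if start is pressed or it was already running (seal-in),
    provided the stop button is not pressed and the overload relay is healthy."""
    return (start_pb or already_running) and (not stop_pb) and overload_ok

running = False
running = motor_run(start_pb=True,  stop_pb=False, overload_ok=True, already_running=running)
print(running)   # True: motor starts
running = motor_run(start_pb=False, stop_pb=False, overload_ok=True, already_running=running)
print(running)   # True: stays running via the seal-in
running = motor_run(start_pb=False, stop_pb=True,  overload_ok=True, already_running=running)
print(running)   # False: stop drops it out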

Controller System for Industrial Automation

The element linking the measurement and the final control element is the controller. Before the advent of computers, controllers were usually single-loop PID controllers, manufactured to execute PID control functions. These days controllers can do a lot more; even so, easily 80 to 90% of controllers are still PID controllers.


Analogue vs Digital Controllers
It is difficult to say that analogue controllers are definitely better than digital controllers; the point is, they both work. Analogue controllers are based on mechanical parts that cause changes to the process via the final control element. Like final control elements, these moving parts are subject to wear and tear over time, which causes the response of the process to change somewhat with time. Analogue controllers control continuously.

Digital controllers do not have mechanical moving parts. Instead, they use processors to calculate the output based on the measured values. Since they do not have moving parts, they are not susceptible to deterioration with time. Digital controllers are not continuous; they execute their calculations at discrete intervals, typically two to three times a second.

Analogue controllers should not be confused with pneumatic controllers. Just because a controller is analogue does not mean it is pneumatic. Pneumatic controllers are those that use instrument air to pass measurement and controller signals instead of electronic signals. An analogue controller can use electronic signals. Compared to pneumatic controllers, electronic controllers (can be analogue or digital) have the advantage of not having the same amount of deadtime and lag due to the compressibility of the instrument air.

Wednesday, July 02, 2008

Measurements

Measurement


Measurements are among the most important equipment in any processing plant. Any decision on what the plant should do is based on what the measurements tell us. In the context of process control, all controller decisions are likewise based on measurements.

With the advent of computers, it is now possible to make inferential measurements, that is, to tell the value of a parameter without actually measuring it physically. It should, however, be remembered that inferential measurement algorithms are themselves based on physical measurements. Rather than rendering measurements redundant, they have made measurements all the more important.


Pressure Measurement

The measurement of pressure is considered the basic process variable in that it is utilized for measurement of flow (difference of two pressures), level (head or back pressure), and even temperature (fluid pressure in a filled thermal system).

All pressure measurement systems consist of two basic parts: a primary element, which is in contact, directly or indirectly, with the pressure medium and interacts with pressure changes; and a secondary element, which translates this interaction into appropriate values for use in indicating, recording and/or controlling.


One common electronic-type transmitter design utilizes a two-wire capacitance technique.

Process pressure is transmitted through isolating diaphragms and silicone oil fill fluid to a sensing diaphragm in the center of the cell. The sensing diaphragm is a stretched spring element that deflects in response to differential pressure across it. The displacement of the sensing diaphragm is proportional to the differential pressure. The position of the sensing diaphragm is detected by capacitor plates on both sides of the sensing diaphragm. The differential capacitance between the sensing diaphragm and the capacitor plates is converted electronically to a 4-20 mA dc signal.


Flow Measurement

Numerous types of flowmeters are available for closed-piping systems. In general, the equipment can be classified as differential pressure, positive displacement, velocity and mass meters.

Differential pressure devices include orifices, venturi tubes, flow tubes, flow nozzles, pitot tubes, elbow-tap meters, target meters, and variable-area meters.

Positive displacement meters include piston, oval-gear, nutating-disk, and rotary-vane types. Velocity meters consist of turbine, vortex shedding, electromagnetic, and sonic designs.

Mass meters include Coriolis and thermal types. The measurement of liquid flows in open channels generally involves weirs and flumes.
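For the differential-pressure devices listed above, the inferred flow follows a square-root relationship with the measured differential pressure. A minimal Python sketch, with the ranging treated as an assumption (in practice it comes from the orifice or venturi sizing calculation):

# Sketch: inferring flow from differential pressure, Q = Qmax * sqrt(dp / dp_max).
from math import sqrt

def dp_flow(dp, dp_max, flow_max):
    if dp < 0:
        dp = 0.0                      # clamp sensor noise around zero flow
    return flow_max * sqrt(dp / dp_max)

# Example: a meter ranged 0-250 mbar for 0-100 m3/h; half the maximum DP
# gives about 70.7% of maximum flow, not 50%.
print(round(dp_flow(125.0, 250.0, 100.0), 1))   # -> 70.7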


Temperature Measurement

How can I measure temperature?

Temperature can be measured via a diverse array of sensors. All of them infer temperature by sensing some change in a physical characteristic. Six types with which the engineer is likely to come into contact are: thermocouples, resistive temperature devices (RTDs and thermistors), infrared radiators, bimetallic devices, liquid expansion devices, and change-of-state devices.


Sunday, June 29, 2008

Why Calibrate ? Or "Calibration? How does that help me?"

British scientist Lord Kelvin (William Thomson, 1824-1907) is quoted from his lecture to the Institution of Civil Engineers, 3 May 1883...
"I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be."

This famous remark emphasizes the importance that measurement has in science, industry and commerce. We all use and depend upon it every day in even the most mundane aspects of life -- from setting your wristwatch against the radio or telephone time signal, to filling the car fuel-tank or checking the weather forecast. For success, all depend upon proper calibration and traceability to national standards.

As components age and equipment undergoes changes in temperature or sustains mechanical stress, critical performance gradually degrades. This is called drift. When this happens your test results become unreliable and both design and production quality suffer. Whilst drift cannot be eliminated, it can be detected and contained through the process of calibration.

Calibration is simply the comparison of instrument performance to a standard of known accuracy. It may simply involve this determination of deviation from nominal or include correction (adjustment) to minimize the errors. Properly calibrated equipment provides confidence that your products/services meet their specifications. Calibration:

* increases production yields,
* optimizes resources,
* assures consistency and
* ensures measurements (and perhaps products) are compatible with those made elsewhere.

By making sure that your measurements are based on international standards, you promote customer acceptance of your products around the world. But if you're still looking to justify that the cost of calibration does add value, check-out some of the calibration horror stories that have been reported.

Saturday, June 28, 2008

Paperless Calibration

Bulging filing cabinets or over-full hanging files are a common office scene. But as far as calibration records are concerned, is the "paperless office" taboo?

What do You Keep in Your Drawers ?

Most quality managers keep calibration results and certificates in their drawers! From discussions that have taken place with many people in industry whose responsibility includes the control of test instruments, a filing cabinet full of paper calibration ‘evidence’ is an integral part of the quality system -- without which audits would fail and the business would crumble. But when pressed for a rationale for such belief, three main reasons to maintain paper records emerge:

  • They believe that auditors would not accept any alternative
  • They believe that ISO9000 or accreditation agencies demand it
  • It is historical; they have always done it and it is a comfort factor.
Alternative Feared

During these discussions a potential alternative option based around electronic records being retained by the calibration supplier, to be provided electronically on demand, was met with a mixed reaction.

On one hand, the positive aspects of fewer papers to handle, file, retain, refresh, retrieve, etc. were enthusiastically supported. However, the conflicting dilemma that such a change might have an impact on audit success tempered that initial enthusiasm. Equipment managers' fundamental belief is that both ISO/IEC Guide 25 / EN45001 (ISO17025) and ISO9000 audit bodies would not recognize or be comfortable with such a ‘virtual’ record system. This fear alone would deter them from seriously considering any such change.

This collective feedback formed the basis of a discussion between Hewlett-Packard in Britain and senior officials from the United Kingdom Accreditation Service, the agency responsible for both accrediting calibration/test labs and overseeing quality management system registrars. The goal was to establish, for the record, whether UKAS would endorse a paperless system. The outcome of this meeting is summarized in a letter to Agilent Technologies from Brian Thomas, Technical Director of UKAS, in which he summarizes that the responsibility of the user of calibration services (the customer)

"....is to be able to demonstrate to the assessor that it can, and does when needed, obtain evidence of calibration and that it has an effective records system enabling tracking back of full calibration data and certification for the defined period."

This doesn’t mean that records are necessarily kept locally by the equipment-user in paper form but that they could, indeed, be retained by the supplier of the service and provided when needed at any time in the future. In most cases, the only data a company needs in real-time relates to parameters found to be outside the instrument’s specification when initially tested (on-receipt status) so that a potential product-recall process may be invoked. But even this doesn’t need to be provided on paper -- it could be made available to the customer via the Internet (e.g. e-mail or a secure web server) or through a variety of other electronic means (fax, floppy disk, etc.).

Control is Crucial, not the Mechanism

Whichever medium is most appropriate, it is the evidence of control that is imperative, not the evidence of paperwork. In Brian’s words:

"In principle, your customers would be able to contract you to retain their calibration records; this arrangement would then become part of their system for retention of records. UKAS assessment of such a customer would address whether this system provided access that was easy, quick and reliable and controlled from the point of view of security, confidentiality and accuracy. Assuming this to be so in practice then the system would be acceptable to UKAS."

This alternative solution is, therefore, one which UKAS would support provided that the customer and the supplier met some key requirements. Those requirements are concisely detailed by Brian as:

"The documentation of such records and certification is acceptable in any form of medium, hard copy, electronic, etc. provided that it is legible, dated, readily identifiable, retrievable, secure and maintained in facilities that provide a suitable environment to minimize deterioration or damage and to prevent loss."

Dispelling Reluctance

So, the voice of industry is clear. It would like to take advantage of contemporary technology by contracting-out its data and certificate storage requirements and, provided that their suppliers could satisfy their needs (echoed by the needs of UKAS above), they are willing to forego historical practices by trusting virtual documentation. But the most significant reason that they are reluctant to take this step is fear of audit failure.

Agilent Technologies believe that a major step forward would be made if quality system and accreditation consultants and assessors could advise their clients that, far from impeding audit success, such a move could enhance it -- whilst at the same time saving space, time and ultimately money for both the equipment owner and calibration provider.

Calibration, Verification or Conformance?

When discussing calibration requirements with a potential supplier, it's obviously important to understand what's being offered. Other articles in this section should help you establish your requirements and distinguish the differences between available services. But one of the variations has, at times, confused even calibration laboratories and quality auditors. It's a matter of the difference between calibration, verification and even conformance.

Similar to the often confused specification terms accuracy and precision, a myth became "established wisdom" that calibration and verification are differentiated on the basis of quality or integrity.


Popular opinion holds that verification is a quick check of performance, perhaps made without any real traceability, whereas calibration provides genuine assurance that the product really meets its specification. In fact, the US national standard ANSI/NCSL-Z540 defines "verification" as being calibration plus evaluation of conformity against a specification. This definition originated with the now-obsolete ISO/IEC Guide 25, but neither its replacement (ISO/IEC 17025) nor the International Vocabulary of Metrology (VIM) currently includes it or any alternative. The only relevant international standard that includes terminology covering the process of both calibrating and evaluating a measuring instrument's performance against established criteria is ISO10012, which uses the rather cumbersome term "metrological confirmation".

Calibration is simply the process of comparing the unknown with a reference standard and reporting the results. For example:
Applied= 1.30V, Indicated= 1.26V (or Error= -0.04V)
Calibration may include adjustment to correct any deviation from the value of the standard.

Verification, as it relates to calibration, is the comparison of the results against a specification, usually the manufacturer's published performance figures for the product (e.g. Error= -0.04V, Spec= ±0.03V, "FAIL"). Some cal labs include a spec status statement on their Certificate of Calibration (i.e. the item did/did not comply with a particular spec).
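A tiny Python sketch of that distinction, mirroring the worked figures above (the helper names are mine, not a standard's): calibration reports the error, verification then compares the error against a specification limit.

def calibration_error(applied, indicated):
    """Calibration: compare the unknown against the reference and report the error."""
    return indicated - applied

def verify(error, spec_limit):
    """Verification: return 'PASS' if |error| is within the +/- spec limit."""
    return "PASS" if abs(error) <= spec_limit else "FAIL"

error = calibration_error(applied=1.30, indicated=1.26)      # -0.04 V
print(round(error, 3), verify(error, spec_limit=0.03))       # -0.04 FAIL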

Where no judgment is made about compliance, or correction has not been made to minimize error, it has been suggested that Certificate of Measurement would be a more descriptive title to aid recognition of the service actually performed. Some suppliers also use Certificate of Verification where no measurements are involved in the performance testing (such as for certain datacomm/protocol analyzers), rather than Certificate of Functional Test as this latter term is often perceived as simply being brief, informal checks as might be performed following a repair (often termed "operational verification").

Verification can also relate to a similar evaluation process carried out by the equipment user/owner where the calibration data are compared to allowances made in the user's uncertainty budget (e.g. for drift/stability between cals) or other criteria such as a regulation or standard peculiar to the user's own test application.

Verification is not intermediate self-checking between calibrations. Such checks are better termed confidence checks, which may also be part of a Statistical Process Control regime. The results of confidence checks may be used to redefine when a "proper" calibration is required or may prompt modification of the item's working spec as assigned by the user.

But what about conformance, especially regarding the meaning of a Certificate of Conformance? Typically available when an instrument is purchased, it is now generally recognized that such a document has little value as an assurance of product performance. Of course, the manufacturer expects that the product conforms to its spec but, in this sense, the document simply affirms that the customer's purchase order/contract requirement has been duly fulfilled.

Thursday, June 26, 2008

Uncertainty Myths

Myth: ISO17025 requires that measured values and measurement uncertainty be reported on a certificate.

Truth: This is true only if the certificate does not include a statement concerning the equipment's compliance to a stated specification; otherwise, section 5-10-4 says that the results and uncertainty must be maintained by the lab.
Myth: We need to determine our own measurement uncertainty, so we need to know the calibration lab's uncertainty.

Truth: If the calibration confirmed that the instrument met the manufacturer's specification, the effect of uncertainty on that status decision has already been taken into account (as required by ISO17025, para. 5-10-4-2). In this case, the user's own uncertainty budget starts with the product specification and the calibration uncertainty is not included again.

If the calibrated item does not have a specification (i.e. the certificate provides only measured values) then the cal lab's uncertainty will need to be included in the user's own uncertainty analysis.

Myth: The need to know "uncertainty" is new. We've been certified against ISO9001:1994 for years and have never been asked before.

Truth: You've just been lucky or were satisfactorily meeting the requirement without realizing it!

Look again at clause 4-11-1; it clearly states that "...equipment shall be used in a manner which ensures that the measurement uncertainty is known and is consistent with the required measurement capability."

For the majority of instrument users, the requirement is readily satisfied by referring to the equipment specifications. In general terms, the specification is the user's uncertainty.

The uncertainties that an accredited lab will report on a certificate are published in their Scope/Schedule. The published capability represents the best (smallest possible) measurement uncertainties, perhaps applicable to particular characteristics and types of tested equipment. It's very unlikely that those figures would be assigned to all calibrations made assuming a wide variety of models are seen. Until measurements are made, it may not be possible for the cal lab to estimate the uncertainty that will be assigned because the unit-under-test contributes to the uncertainty.

Myth: Published "best measurement uncertainty" can never be achieved because it assumes an ideal unit-under-test.

Truth: In the past there have been different practices allowed by the various conformity assessment schemes. However, the European co-operation for Accreditation publication EA-4/02 (refer to Uncertainty Resources in this Basics section) recognizes that harmonization was required and, in Appendix A, establishes definitions.

This means that, certainly within Europe, best measurement uncertainty (BMC) must include contributions associated with the normal characteristics of equipment they expect to calibrate. For example, it's not acceptable to base the uncertainty of an attenuation measurement on a device having an assumed perfect match. Some BMC's are qualified with the phrase "nearly ideal" regarding the test item but this means that the capability does not depend upon the item's characteristics and that such perfect items are available and routinely seen by the lab.

Myth: Calibrations without uncertainty are not traceable.

Truth: It is true that the internationally agreed definition of traceability includes a need for the uncertainty of the comparisons to be stated. However, it doesn't mean that a calibration certificate must include uncertainty (or measured values); ISO17025 and other standards allow this if a specification compliance statement is used, although the information must be maintained by the lab.

Myth: By using a correction based on the instrument's error as determined by calibration, the working specification can be tightened. This effectively minimizes the user's own measurement uncertainty to that of the calibrating lab.

Truth: The equipment manufacturer's specifications cannot be ignored. For instance, they include allowances for drift over time and environmental conditions. In contrast, the calibration represents a performance assessment at that time and in particular conditions. Yet the myth dangerously assumes that the "error" is constant despite these variables.

Calibration Myths

Myth: A Certificate of Calibration means that the instrument met its specification, at least when it was tested. Also: calibration means that the equipment was adjusted back to nominal.

Truth: Whether this is correct or not depends on the calibration laboratory's service definitions or on what was agreed between the supplier and customer. The international meaning of "calibration" does not require that errors detected by the measurement comparison process be corrected; adjustment to return an item to specification compliance may, or may not, be performed.

Unless the Certificate contains a statement affirming that the item met the published specification it is merely a report of the measurements made. In this case it is left to the equipment user to review the data against requirements. The equipment may have been found and returned to the user out-of-tolerance!

Myth: Some equipment is more expensive to have calibrated than to purchase new each year. Just scrap the old item, which was probably worn anyway.

Truth: The first part of this assertion is TRUE, but...
It could be that a calibration certificate is not provided with the new purchase. Some users are not concerned, perhaps relying upon the manufacturer's reputation to deliver new products that are specification-compliant, which may be a justifiable risk.

Less justifiable is the suggested practice to dispose of the old item without first getting it calibrated. How would you know if it had been used in an out-of-tolerance condition? If it had been out-of-spec, would it affect the integrity or quality of the process or end-product? If so, the proposal is a false economy !

Myth: Only measuring equipment with the possibility of adjustment needs periodic calibration. As an example, liquid-in-glass thermometers only need certification when first put into service; they either work or are broken.

Truth: Just because an item is not adjustable doesn't mean that it's perfectly stable. Some standards may be subject to wear which changes their value (e.g. a gauge block), or they may be invisibly damaged, leading to non-linear or odd behavior (e.g. a cracked glass thermometer).

Or the material from which they are constructed may also not be stable. For example a quartz crystal oscillator changes its resonant frequency because mechanical stress in the crystalline structure is released over time.

Myth: If an item needs routine calibration, the manufacturer states what is necessary in the equipment's handbook; otherwise calibration isn't required.

Truth: It is true that some manufacturers provide such advice (Agilent Service Manuals spring to mind!). But many, typically smaller, companies do not make this investment. It's unsafe to assume that no advice means no calibration.

Also be aware that industry practices change over time and a manufacturer's recommendations as published thirty years ago may not be as metrologically rigorous as those produced to match today's market expectations.

Myth: The original manufacturer or the calibration lab defines the appropriate calibration interval for the product or item. The user is bound by that periodicity.

Truth: It's often unrecognized that a product's specification is generally linked to a time period. Simplistically, the manufacturer may establish the specification having assessed the accuracy and drift of prototype units. It may well be statistically justified, for a particular confidence level, that a certain percentage of the product population (all those produced) is likely to still comply with the spec after the stated period. Whatever the mechanism used, the calibration interval is only a recommendation.

Some cal labs offer a service to manage the periodicity of customers' equipment based on the accumulated cal history. Otherwise, this risk management responsibility remains with the user.

Myth: Safety regulations stipulate that the legal maximum period allowed between cals is one year.

Truth: The problem with such a policy is that it may be implemented differently from what was intended. Maybe all items would be assigned a one-year interval without any regard for its justification or applicability to the use of a particular piece of equipment?

The assignment of a suitable interval should be recognized as part of an equipment user's risk management strategy. One must consider the knock-on effects if the item is later found to have been used in an out-of-tolerance condition (e.g. product recall costs). So, there's a balance to be achieved between the inconvenience and cost of excessive calibration and impact of unreliable kit.

In safety-critical applications any degree of risk may be unacceptable but this would probably be implemented by parallel and back-up systems. Total reliance upon a single piece of equipment, even if tested every day, would be unusual.

Standards Myths



Myth: ISO17025 states that it's equivalent to ISO9000, so ISO9000 must be equivalent to ISO17025.

Truth: ISO17025 does indeed state, in its Introduction and in paragraph 1-6, that compliance with the standard means that the laboratory's quality system for its calibration or testing activities also meets the criteria of ISO9001/2. Two points to emphasize, though:

  1. The activities of many service providers extend beyond just calibration or testing (e.g. repair, supply of parts, training, etc.), where 17025 does not apply.
  2. The equivalence is to the 1994 version of the ISO9000 standards which was superseded in late 2000.
Myth: My factory's quality system complies with ISO9000, so all my equipment must be calibrated "Before & After" adjustment.

Truth: A calibration service that provides assessment of the product's performance on receipt and, if necessary, after adjustment or repair has been completed serves two purposes.

  1. It enables analysis of the equipment's stability over time.
  2. More significantly, if the on-receipt performance did not meet the user's accuracy requirements, an investigation of its impact can be triggered that may result in product or work recall.

These possibilities need only apply to equipment affecting the quality of the factory's product or service, for example that used for alignment or end-of-line inspection. Understanding the distinction can save a lot of money!

Myth: Accreditation agencies define the extent of testing for various products so that users can have confidence in their equipment's overall performance.

Truth: In some countries there are national and regulatory standards that are applicable to some measuring equipment. These usually relate to legal metrology (i.e. measurements made in the course of consumer trade), statutory codes (e.g. safety), or certain sectors of industry.

However, accreditation bodies do not stipulate that these must be used although labs would generally do so where applicable. Also, there are no standards concerning the typical general purpose instruments that may be used in the electronics industry, for example.

Although accreditation criteria include a need for calibration certificates to draw attention to limitations in the scope of testing performed versus the product's capability, it is left to the client and supplier to agree the content of the service. Whether the calibration utilizes any recommendations of the equipment's manufacturer is part of this negotiation.

Myth: My calibration supplier is ISO17025 accredited, so all the calibrations they undertake meet that standard.

Truth: The results of a calibration performed under the scope of the accreditation are reported on a certificate bearing the authorized brand-mark of the accreditation program. For commercial reasons, most accredited laboratories offer at least two calibration service levels -- a certificate with the accreditation logo or a company-proprietary certificate.

The processes used to undertake the calibration and the extent of testing may be the same in both cases, or may differ. Some accreditation programs allow the inclusion of (a minority of) measurements which are not within the lab's accredited capability, providing they are clearly identified as non-accredited.

Myth: Results which are simply reported as "Pass" or "Fail" are not acceptable.

Truth: Recording of numerical measurement data is not relevant for some tests. This may be because the test is of the "go, no go" type (e.g. checking a bore using a plug gauge) or because the test procedure establishes known test conditions and looks for a satisfactory response in the unit-under-test (e.g. checking the input sensitivity of a frequency counter by applying a signal whose amplitude equals the specified sensitivity and noting whether stable triggering is observed).

To summarize, pass/fail is valid where the decision criteria are defined (i.e. specification limits).

Myth: A supplier that has an ISO9000 certificate is good enough.

Truth: This may be reasonable, but questions concerning the scope of the certification should be asked. If the quality system that was assessed related to a company's pressure sensor manufacturing operation in Chicago, how much assurance does that endow on micrometer service at their Dallas repair office? Possibly none! The scope of registration is explicit in coverage.

Myth: Only accredited calibrations are traceable to national standards.

Truth: Traceable measurements are those supported by records that can demonstrate an unbroken series of calibrations or comparisons against successive standards of increasing accuracy (reducing uncertainty), culminating in a recognized national metrology institute.

Measurement traceability is, of course, also reviewed as part of an ISO9000 quality system certification.

Myth: My own testing laboratory is accredited against ISO17025, so our instruments must be calibrated at an accredited lab.

Truth: This may depend upon the interpretation of the standard by the particular accreditation body. Clause 5-6-2-1-1 of ISO17025 does not actually stipulate that traceability must be obtained only from an accredited facility, only that the supplier "can demonstrate competence, measurement capability and traceability".

The British accreditation agency has confirmed that it will not add supplementary requirements to the 17025 criteria. It also accepts the possibility of traceability to a non-accredited source provided that sufficient evidence is available to UKAS to confirm that the supplier complies with the standard and that the lab being audited by UKAS has the critical technical competence to make such an assessment.

Uncertainty Made Easy

About this Article
This paper by Ian Instone was first presented at the Institution of Electrical Engineers, London in October 1996 at their Colloquium entitled "Uncertainties made easy".
Note: Details of the current versions of recommended uncertainty guidance publications can be found on the Uncertainty Resources page in this section.


Simplified Method for Assessing Uncertainties in a Commercial, Production Environment

Introduction

With the introduction of Edition 8 of NAMAS document NIS3003 (1) and the inclusion of the principles outlined in the ISO Guide to the Expression of Uncertainty in Measurement (2), the assessment of uncertainties of measurement has become a task more suited to a mathematician than to the average calibration engineer. In some companies with small calibration departments it might be possible for all of the engineers to be re-educated in the assessment of uncertainties; in larger laboratories, however, it is more usual for engineers to become specialists in certain aspects of the calibration process. This paper aims to demonstrate a simplified approach to uncertainty assessment which falls broadly within the guidelines set out in both NIS3003 and the ISO Guide.

One of the first stumbling blocks in NIS3003 is the necessity to derive a measurement equation. Whilst it is agreed that this is a useful skill which might demonstrate a more thorough understanding of the measurement principles, it seems only to serve as an additional step in the uncertainty assessment process, a step which was not thought necessary in the previous seven editions of NIS3003. The next step, deriving sensitivity coefficients by the use of partial differentiation, will cause most calibration engineers to reach for the mathematics textbook.

Fortunately, in many cases these two steps can be replaced by a more practical approach. A list of contributions to the uncertainty budget can be used in place of the measurement equation, and each term may be "partially differentiated" by varying the quantity over its range and measuring its influence on the measurand. For instance, if it has been determined that temperature variations have an influence upon the quantity being measured then, rather than produce a measurement equation which includes temperature and partially differentiate it, one can simply perform the measurement, change the temperature by the specified amount and re-measure. The resultant change in the measurand becomes a contribution to the uncertainty budget.

There are also cases where the same approach may be used but where there is no need to perform the measurements to obtain the data. For instance, many resistors have a temperature coefficient specification in the form of ±N parts per million per degree Celsius. Assuming the temperature is controlled to within ±2°C, the change in the value of the resistor due to temperature fluctuations will be given by:

±N parts per million × 2°C
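As a quick worked version of that contribution (the figures below are assumptions, not from the paper): a resistor specified at ±2 ppm/°C kept in a laboratory controlled to ±2°C.

tempco_ppm_per_degC = 2.0      # +/-N ppm/degC from the resistor data sheet (assumed)
temperature_band_degC = 2.0    # +/- control band of the laboratory (assumed)

contribution_ppm = tempco_ppm_per_degC * temperature_band_degC
print(contribution_ppm)        # +/-4 ppm added to the uncertainty budget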

Most contributions to an uncertainty budget can be assessed using either method. The practical method described will often yield smaller values because they are based on measurements performed on only a small quantity of items, whereas the latter method is based upon the equipment specification, which should cover the entire population of that instrument and so will normally produce larger contributions to the uncertainty budget.

Type-A Uncertainties

In a commercial calibration laboratory it is often not economical to perform several sets of measurements on a particular instrument solely to produce a value for the random (Type-A) uncertainty contribution. The alternative method shown in NIS3003 is preferred and is usually employed where possible. In cases where multiple measurements are performed, it is usual practice to calculate the standard deviation of the population. The estimated standard deviation of the uncorrected mean of the measurand is then calculated using:

Esd = Psd / sqrt(N)

Where:
Esd is the estimated standard deviation of the uncorrected mean of the measurand
Psd is the standard deviation of the population of values
N is the quantity of repeated measurements

When the quantity of measurements to be performed on the equipment being calibrated is limited to one set of measurements then N in the equation above will be 1. The standard deviation of the population Psd will previously have been determined from an earlier Type-A evaluation based upon a large number of repeated measurements. In an ideal world the measurements would be repeated on several instruments within the family and the worst case standard deviation used in the Type-A assessment. In practice however, providing the assessment techniques outlined in this paper are employed, the Type-A contribution to the uncertainty budget can often be shown to be negligible so the need to make a very large number of repeated measurements is reduced.
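A small Python sketch of that calculation (the readings are invented, purely illustrative): the population standard deviation comes from an earlier, larger Type-A study, while the calibration itself may use only a single measurement.

from statistics import stdev
from math import sqrt

earlier_study = [10.002, 10.004, 10.001, 10.003, 10.002, 10.005,
                 10.003, 10.002, 10.004, 10.003]
Psd = stdev(earlier_study)       # standard deviation from the earlier Type-A evaluation

def estimated_sd_of_mean(Psd, N):
    """Esd = Psd / sqrt(N), the estimated standard deviation of the uncorrected mean."""
    return Psd / sqrt(N)

print(estimated_sd_of_mean(Psd, N=1))    # a single measurement: Esd equals Psd
print(estimated_sd_of_mean(Psd, N=10))   # averaging ten readings shrinks it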

In the ideal world where customers are willing to pay unlimited amounts of money for their calibrations, or where we have very large quantities of similar instruments to calibrate, it is a fairly simple matter to measure several instruments many times and obtain a good, reliable estimate of the standard deviation. In reality, customers have limited budgets and calibration laboratories rarely have even small quantities of particular instruments available for extensive testing. A simpler, more cost-effective method is required.

Before embarking upon the assessment of uncertainties we need to understand exactly what our customer expects of their calibration report and what use they will make of it. For the majority of simple reference standards such as resistors, standard cells, capacitors, etc., it is likely that the measured values will be used by the customer, so an uncertainty assessment as defined by NIS3003 will be required. For the great majority of instruments it is often not possible to make any use of the values obtained during calibration, so it is usually only necessary to provide a calibration which demonstrates that the instrument is operating within its specifications. In these cases it is usually not necessary to provide measurements with the lowest measurement uncertainties, which allows some compromises to be made.

ISO 10012-1 (3) suggests that we should aim for an accuracy ratio (instrument specification to calibration uncertainty) of greater than 3:1. ANSI/NCSL Z540-1 (4), the American interpretation of ISO Guide 25 (5), suggests that uncertainties become significant when the accuracy ratio is less than 4:1. If we assume that the instrument specification has the same coverage factor as the uncertainty, the following expression describes the resultant combination of the uncertainty and specification which should be used when the instrument is used to make measurements:

§ = sqrt[S² + U²]

Where:
§ is the resultant expanded specification resulting from the calibration
S is the specification of the parameter being measured
U is the uncertainty of measurement when performing the calibration

In the cases where S >= 4U the effect of the uncertainty upon the specification is shown to be negligible. For instance, assume that S = 8 and U = 2; then:

§ = sqrt[8² + 2²]
  = sqrt[64 + 4]
  = sqrt[68]
  = 8.25

Therefore, with an accuracy ratio of 4:1 the effective specification expands by 3.1%. As most uncertainties are quoted to only two figures it is unlikely that this small increase would have any effect. Repeating the calculation with an accuracy ratio of 3:1 produces an increase of only 5.4%.
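
A quick way to check these figures is to evaluate § = sqrt[S² + U²] directly; the sketch below reproduces the 4:1 example from the text and repeats it for a 3:1 accuracy ratio.

import math

def effective_specification(s: float, u: float) -> float:
    # resultant expanded specification: sqrt(S^2 + U^2)
    return math.sqrt(s**2 + u**2)

print(effective_specification(8, 2))            # 8.25 for S = 8, U = 2 (4:1 ratio)
print(effective_specification(8, 2) / 8 - 1)    # ~0.031, i.e. a 3.1 % expansion
print(effective_specification(6, 2) / 6 - 1)    # ~0.054, i.e. about 5.4 % at a 3:1 ratio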

The same analogy can be used when assessing the significance of a particular uncertainty contribution. Type-A uncertainties are those assessed using statistical methods usually based on many sets of measurements, thereby making them the most expensive to assess. Using the model above we can show that Type-A uncertainties are insignificant when they are less than 30% of the magnitude of the Type-B uncertainties:

Total Uncertainty ≈ Type-B uncertainties, where Type-A < 0.3 x Type-B

and:

Effective Specification ≈ Specification, where Total Uncertainty < 0.3 x Specification

From above we can show that Type-A uncertainties can be regarded as insignificant when they are less than 0.09 of the specification being tested, or in approximate terms Type-A uncertainties can be regarded as negligible when they are less than 10% of the specification.
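
The arithmetic behind those two statements can be checked in a few lines; the sketch below shows that a Type-A term at 30% of the Type-B terms inflates the combined uncertainty by only a few percent, and that chaining the two 30% criteria gives the 0.09 (roughly 10%) figure quoted above.

import math

type_b = 1.0
type_a = 0.3 * type_b                      # Type-A at the 30 % threshold

combined = math.sqrt(type_a**2 + type_b**2)
print(combined / type_b - 1)               # ~0.044: only about 4.4 % larger than Type-B alone

# Chaining the criteria: Type-A < 0.3 x Total Uncertainty and
# Total Uncertainty < 0.3 x Specification gives Type-A < 0.09 x Specification
print(0.3 * 0.3)                           # 0.09, i.e. roughly 10 % of the specification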

Verifying that an uncertainty contribution is less than a given value is usually much easier than assessing its precise magnitude. One method, described in an earlier paper (6), normally requires only two complete sets of measurements to be made on the same instrument. One set of measurements is then subtracted, one measurement at a time, from the other set. The largest difference is then taken as a conservative estimate of the Type-A uncertainty contribution. This technique has been verified many times against uncertainties assessed in the traditional way and has always produced an acceptably conservative estimate of the Type-A contribution, provided that an adequate number of measurements is compared across the range. Assuming that the comparison produces no values outside the limits defined earlier (10% of the DUT specification or 30% of the Type-B uncertainty estimate), it can be assumed that the Type-A uncertainties are not significant. To provide good confidence and consistency in the assessment process, the value defined as insignificant should always be included in the assessment.
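
A minimal sketch of that two-run comparison is shown below; the measurement values, DUT specification and Type-B estimate are all hypothetical and merely illustrate the largest-difference test against the 10% and 30% limits.

# Two complete sets of measurements on the same instrument (hypothetical values)
set_1 = [10.001, 10.003, 9.998, 10.002, 10.000]
set_2 = [10.002, 10.001, 9.999, 10.004, 10.001]

# Largest point-by-point difference, taken as a conservative Type-A estimate
largest_difference = max(abs(a - b) for a, b in zip(set_1, set_2))

dut_specification = 0.05   # hypothetical DUT specification (+/-)
type_b_estimate = 0.010    # hypothetical Type-B uncertainty estimate (+/-)

negligible = (largest_difference < 0.10 * dut_specification
              or largest_difference < 0.30 * type_b_estimate)
print(largest_difference, negligible)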

It is also possible to use values for the Type-A assessment gained from other, related instruments, providing some knowledge of the construction of the instrument under test is available. For instance, a laboratory may already have assessed a certain 50MHz to 18GHz signal generator and verified that the Type-A uncertainty contribution meets the criteria outlined above. A 12GHz signal generator from the same family is then submitted for assessment. In this case, providing the two signal generators share similar designs, hardware and layouts, and the same test methods and equipment are used, it would be reasonable to apply the 18GHz Type-A assessment to both generators. In other cases it might be possible to refer to published data for certain Type-A contributions.

In cases where these techniques reveal that the Type-A contributions are significant (as defined above) the uncertainty assessment should be performed in the usual way using many repeated measurements.

Sensitivity Coefficient

In most cases sensitivity coefficients can be assumed to be 1. However, there are some notable exceptions where other values will be used. One of these relates to the measurement of resolution bandwidth on a spectrum analyzer. In this case we have measurement uncertainties expressed in two different units: measurements of amplitude are expressed as an amplitude ratio (usually in dB) and measurements of frequency in Hz. The bandwidth measurement is often performed by applying a "pure" signal to the analyzer's input and setting the controls so that the response shown below is visible. The envelope describes the shape of the filter, and normally we would measure the 3dB, or 30% below the reference, point (shown on the left of the figure below). To assess the sensitivity coefficient we need to determine the gradient of the graph at the measurement point. Spectrum analyzers often have an amplitude specification of 0.1dB per 1dB, therefore the amplitude uncertainty at 3dB will be ±0.3dB, or ±7%. We then move ±7% from the 70% point and read off the resultant change in frequency.

The resultant change in frequency due to amplitude uncertainty is: ±3.8 frequency units. Since this value has been found for an amplitude specification of ±0.3dB it will have a sensitivity coefficient of 1.

Fig.1 -- Bandwidth measurement (determining the sensitivity coefficient)

On the right of the figure is a similar construction for assessing the frequency uncertainty due to the amplitude uncertainty when the 6dB (50%) point is measured. In this case the amplitude uncertainty increases to ±0.6dB (0.1x6). As a linear ratio this equates to ±13%.

Reading from the graph this represents a frequency uncertainty of ±6 frequency units.

Assessing the uncertainty contributions in this way greatly reduces the possibility of errors that might occur when following the theoretical approach using partial differentiation. In addition, a practical technique such as this is preferred by most calibration engineers.

Other empirical means of obtaining values for the uncertainty budget may also be employed. For instance it might be possible to establish a value for temperature coefficient by changing the environmental temperature by a few degrees. In this case we could derive a sensitivity coefficient for the output signal in terms of temperature change.
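
As a sketch of that empirical approach, the few lines below derive a sensitivity coefficient from two readings taken a few degrees apart and then scale it by the assumed ±2°C laboratory limits; all of the numbers are invented for the example.

# Readings of the measurand at two temperatures (hypothetical values)
output_at_23_degC = 10.0000
output_at_26_degC = 10.0006

# Sensitivity coefficient: change in output per degree Celsius
sensitivity = (output_at_26_degC - output_at_23_degC) / (26.0 - 23.0)
print(f"sensitivity: {sensitivity:.5f} units per degC")

# With the laboratory controlled to +/-2 degC the limit value of the contribution is:
contribution = abs(sensitivity) * 2.0
print(f"contribution: +/-{contribution:.5f} units")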

Total Uncertainty Budget

One of the principal benefits of the latest revision of NIS3003 is the strong suggestion that all of the uncertainty contributions should be listed in a table along with their probability distributions. Whilst at first sight this seems a tedious task, it pays dividends in the future because it makes the contributors to the budget absolutely clear. The table below shows a typical example of an uncertainty assessment for a microwave power measurement at 18GHz using a thermocouple power sensor. These types of power sensor measure power levels relative to a known power, so a 1mW, 50MHz power reference is included on the power meter for this purpose. In most cases it is simpler, and more correct, to use a measuring instrument's specification rather than try to apply corrections and assess the resultant uncertainty. For the majority of measurements it is not possible to make corrections based upon a calibration report because that report only indicates the instrument's calibration status at the time it was measured, and only when operated in the particular mode described on the certificate; it is not possible to predict the errors at any other points.

Symbol | Source of Uncertainty | Value ±% | Probability Distribution | Divisor | Ci | Ui ±%
K | Calibration factor at 18 GHz | 2.5 | normal | 2 | 1 | 1.25
D | Drift since last calibration | 0.5 | rectangular | sqrt(3) | 1 | 0.29
I | Instrumentation Uncertainty | 0.5 | normal | 2 | 1 | 0.25
R | 50 MHz Reference spec. | 1.2 | rectangular | sqrt(3) | 1 | 0.69
Mismatch loss uncertainties:
M1 | Sensor to 50 MHz Reference | 0.2 | U-shaped | sqrt(2) | 1 | 0.14
M2 | Sensor to 18 GHz Generator | 5.9 | U-shaped | sqrt(2) | 1 | 4.17
A | Type-A Uncertainties | 2.1 | normal | 2 | 1 | 1.05
UC | Combined Standard Uncertainty | | normal | | | 4.55
U | Expanded Uncertainty | | normal (k=2) | | | 9.10

Where:

Ci is the sensitivity coefficient used to multiply the input quantities to express them in terms of the output quantity.

Ui is the standard uncertainty resulting from the input quantity.

The standard uncertainties are combined using the usual root-sum-squares method and then multiplied by the appropriate coverage factor (in this case k=2). In some cases it will be appropriate to use a different coverage factor, perhaps when a 95% confidence level is not adequate, or when the input quantities are shown to be "unreliable". The Vi (degrees of freedom of the standard uncertainty) and Veff (effective degrees of freedom) columns have not been included in the table above in order to simplify the assessment process.
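
For completeness, the sketch below repeats the combination for the budget above: each limit value is divided by its divisor, the results are combined by root-sum-squares, and the total is multiplied by k=2. Only the figures already listed in the table are used.

import math

# (limit value in %, divisor) for each contribution, all with Ci = 1
contributions = {
    "K  calibration factor at 18 GHz": (2.5, 2.0),
    "D  drift since last calibration": (0.5, math.sqrt(3)),
    "I  instrumentation uncertainty":  (0.5, 2.0),
    "R  50 MHz reference spec.":       (1.2, math.sqrt(3)),
    "M1 mismatch, 50 MHz reference":   (0.2, math.sqrt(2)),
    "M2 mismatch, 18 GHz generator":   (5.9, math.sqrt(2)),
    "A  Type-A uncertainties":         (2.1, 2.0),
}

standard_uncertainties = [value / divisor for value, divisor in contributions.values()]
combined = math.sqrt(sum(u**2 for u in standard_uncertainties))
expanded = 2.0 * combined    # coverage factor k = 2

print(f"combined: {combined:.2f} %, expanded: {expanded:.2f} %")   # ~4.55 % and ~9.10 %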

Degrees of Freedom

Degrees of freedom is a term used to indicate confidence in the quality of the estimate of a particular input quantity to the uncertainty budget. For the majority of calibrations performed under controlled conditions there will be no need to consider degrees of freedom and a coverage factor of k=2 will be used. In cases where the Type-A uncertainty has been assessed using very few measurements, a different coverage factor, calculated using the degrees of freedom, would normally be used. However, whilst the assessment method proposed in this paper is based on only two sets of measurements, experimental data confirms that this treatment (taking the worst-case difference) produces a reliable, conservative estimate of the Type-A uncertainties. In most cases the degrees of freedom can be assumed to be infinite and the evaluation of the t factor using the Welch-Satterthwaite equation is not necessary. NIS3003 provides some guidance on using these methods but stresses that normally it is not necessary to employ them.
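
For the rare cases where finite degrees of freedom do need to be considered, the sketch below evaluates the Welch-Satterthwaite formula, Veff = Uc^4 / sum(Ui^4 / Vi), for a handful of hypothetical contributions; Type-B terms are given infinite degrees of freedom, which is why Veff usually comes out large enough that k=2 remains appropriate.

import math

# (standard uncertainty, degrees of freedom); Type-B terms treated as infinite
inputs = [(1.25, math.inf), (0.69, math.inf), (4.17, math.inf), (1.05, 9)]

u_c = math.sqrt(sum(u**2 for u, _ in inputs))
v_eff = u_c**4 / sum(u**4 / v for u, v in inputs)   # Welch-Satterthwaite

print(f"combined: {u_c:.2f}, effective degrees of freedom: {v_eff:.0f}")
# Veff is very large here, so the t factor is essentially 2 and k = 2 is adequate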

Conclusion

The uncertainty assessment methods described in this paper have been employed at Hewlett-Packard's UK Service Center for several years. External, internal and informal measurement audits have in every case confirmed that the uncertainties are being estimated with the expected level of confidence. This simplified approach is easier to understand and use, which enables more calibration engineers to contribute fully to the uncertainty assessments.

References
  1. The expression of Uncertainty and Confidence in Measurement for Calibrations, NIS3003 Edition 8 May 1995.
  2. Guide to the Expression of Uncertainty in Measurement. BIPM, IEC, IFCC, ISO, IUPAC, OIML. International Organization for Standardization. ISBN 92-67-10188-9. BSI Equivalent: "Vocabulary for Metrology, Part 3. Guide to the Expression of Uncertainty in Measurement", BSI PD 6461: 1995.
  3. Quality assurance requirements for measuring equipment, Part 1. Metrological confirmation system for measuring equipment, ISO 10012-1:1992.
  4. Calibration Laboratories and Measuring and Test Equipment - General Requirements ANSI/NCSL Z540-1-1994.
  5. General requirements for the competence of calibration and testing laboratories, International Organization for Standardization, ISO Guide 25:1990.
  6. Calculating the Uncertainty of a Single Measurement from IEE Colloquium on "Uncertainties in Electrical Measurements", 11 May 1993, Author Ian Instone, Hewlett-Packard.