The initial selection of a calibrator often is based on a specification sheet - a written description of the equipment's performance in quantifiable terms that applies to all calibrators having the same model number. Because specifications are based on the statistics of a large sample of calibrators, they describe the performance of a group rather than a single, specific calibrator. Any single calibrator should meet all the specifications and usually will significantly exceed most of them.

Good specifications have the following characteristics:
  • They are complete.
  • They are easy to interpret and use.
  • They include the effects found in normal usage such as environment and loading.
Completeness requires that sufficient information be provided so the user can determine the bounds of performance for all anticipated outputs (or inputs), all possible and permissible environmental conditions within those bounds, and all permissible loads.

Ease of use also is important. Many specifications are confusing and difficult to interpret, and mistakes in interpretation can lead to application errors or faulty calibrations.

The requirement for completeness conflicts somewhat with that for ease of use; one can be traded for the other. The challenge of specification design is to satisfy both, which is sometimes accomplished by bundling the effects of many error contributions within a useful, common window of operation. For example, the listed performance may be valid for a period of six months when used in a temperature range of 23°C ±5°C, in humidity up to 80%, and for all loads up to a specified maximum rating. This is a great simplification for the user because the error contributions of time, temperature, humidity and load are included in the basic specification and can be ignored as long as operation is maintained within the listed bounds.
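The bundled bounds just described amount to a single gate on the specification's validity. As a minimal Python sketch of that idea (the function name, load argument and the roughly 182-day rendering of "six months" are illustrative assumptions, not from any vendor's data sheet):

    def within_spec_window(temp_c, humidity_pct, load, max_load, days_since_cal):
        """True if the bundled specification can be applied as-is (hypothetical limits)."""
        return (18.0 <= temp_c <= 28.0        # temperature within 23°C ±5°C
                and humidity_pct <= 80.0      # humidity up to 80%
                and load <= max_load          # within the maximum load rating
                and days_since_cal <= 182)    # valid for about six months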

The Importance of Specifications

Comprehensive specifications are essential in maintaining a chain of traceability and ensuring global uniformity of products, quality and safety.

Traceability. Traceability means that an instrument has been proven to conform with the official standards for the parameters it measures - that is, the measurements made by the equipment are traceable to national standards. A certified instrument is one that has been tested regularly against better-performing certified devices. Specific test procedures are used, the results are documented, and the tests must be repeated at specific time intervals. This sequence of comparisons against superior-performing certified devices is repeated until, finally, comparisons are made with standards maintained by national authorities such as the National Institute of Standards and Technology (NIST) in the United States. This unbroken chain of comparisons often is called a "traceability chain."

For a process calibrator, traceability means that the calibrator's test and measurement functions have been verified to perform within its required specifications, and that its usage falls within the appropriate limits of performance, including signal levels, environmental conditions, conditions of use and time between performance verifications. The performance usually is checked using the procedures recommended by the manufacturer and the recommended types of superior-performing equipment.

Traceable measurements ensure the uniformity and quality of manufactured goods and industrial processes. They are essential to the development of technology. Without traceable measurements, variances can occur in product/process quality that often prove costly. Bad quality is expensive, both in terms of cost to rectify and in damage to a company's reputation. Traceable measurements also support equity in trade as well as compliance to regulatory laws and standards.

The global acceptance of the ISO 9000 quality standards also has led to an increase in commercial requirements for the traceable calibration of test and measurement equipment. The purpose is to ensure that the products manufactured in one country will be acceptable in another on the basis of agreed-to measurement standards, methods and practices.

How Good Should a Calibrator's Specifications Be?

Whether doing instrument calibration, industrial process control or even product performance testing, the equipment performing the test must always have superior performance when compared to the tolerances of the test. The Test Uncertainty Ratio (TUR) is the ratio of the test tolerance to the uncertainty of the testing equipment. To eliminate undesired effects due to errors in the calibration equipment, the test equipment's performance ideally should be 10 or more times better than the test tolerance limits. However, this often is not practical to achieve. It has been shown that if the equipment is three to five times better than the test tolerance, the calibration equipment error has no practical consequence. As a result, a four-to-one ratio is commonly accepted in industry as an adequate TUR. For example, if a transmitter with an accuracy specification of 1% is to be checked by a calibrator, the performance of the calibrator during the test must be 0.25% or better (a minimum of four times better performance than the transmitter).
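As a minimal sketch of this arithmetic in Python (the function name and its 4:1 default are illustrative, not drawn from any standard):

    def required_calibrator_accuracy(test_tolerance_pct, tur=4.0):
        """Worst-case calibrator accuracy (%) needed to maintain a given TUR."""
        return test_tolerance_pct / tur

    # Checking a 1% transmitter at the commonly accepted 4:1 ratio:
    print(required_calibrator_accuracy(1.0))   # -> 0.25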

Figure 1. A calibrator can be specified to super-high performance levels at the time of calibration; however, such levels are only good for a few minutes following calibration.

Calibrator Performance vs. Its Specifications

It must be understood that the published specifications of equipment apply to an entire population of equipment provided by a manufacturer, not just one individual piece of equipment. Consequently, an individual piece of equipment should not just marginally meet its published specifications but usually should perform much better. Just how much better is determined by the philosophy and policies of the manufacturer.

Interpreting Specifications. Many companies have complex procedures and tests that a calibrator must pass prior to purchase and acceptance. But, before that evaluation can begin, one must decide which calibrators should be evaluated. The calibrator specifications usually are the first step in the process.

Ideally, specifications are a written description of a calibrator's performance that objectively quantifies its capabilities. It should be remembered that specifications do not equal performance - they are claims about performance. They can be conservative or aggressive. Manufacturers are not bound by any convention as to how they present specifications.

The buyer also should be aware that a calibrator specification applies to an entire product run of a particular instrument model. Because variation in the performance of individual calibrators from nominal tends to be distributed normally, a large majority of the units of a specific model should perform well within their specification limits. In fact, most individual calibrators can be expected to perform better than specified, although the performance of an individual calibrator should never be taken as representative of the model class as a whole.

The calibrator will most likely provide reliable performance, but there is always a small chance that its performance will be marginal, or even out of specification, at some parameter or function.

Accuracy vs. Uncertainty. Typically, the number on the cover of a data sheet or brochure will read "accuracy to 0.02%." This is commonly accepted usage, equivalent to saying "measurement uncertainty of 0.02%." It means that measurements made with the device can be expected to be within 0.02% of the true value. In examining a specification, you need to be aware that a specification such as this:

  • Is often over the shortest time interval.

  • Is often over the smallest temperature span.

  • Is sometimes a relative specification.

  • May be derived using a nonconservative confidence level.

Figure 2. Outside the specified range, a temperature coefficient is used to describe the degradation of the accuracy specification.

Consider the impact of each of these factors.

Time. Specifications usually include a specific time period during which the calibrator can be expected to perform as specified. This time period, called the calibration interval, is necessary to account for the drift rate inherent in a calibrator's analog circuitry; it is a measure of the calibrator's ability to remain within its stated specification for a certain length of time. Time periods of 30, 90, 180 and 360 days are common and practical. Figure 1 shows how a calibrator's uncertainty increases over time. When evaluating specifications, make sure you're comparing the same time intervals.

Any calibrator can be specified to super-high performance levels at the time of calibration. Unfortunately, such levels are good for only the first few minutes following calibration. If the specifications for a calibrator do not state the time interval over which they are valid, the manufacturer should be contacted for a clarification.

Temperature. Performance over the specified temperature range also is critical. Make sure the temperature intervals specified will meet your workload requirements.

The specified temperature range is necessary to account for the thermal coefficients in the calibrator's analog circuitry. The most common ranges are centered about room temperature, 23°C ±5°C. This range reflects realistic operating conditions. Remember, temperature bounds must apply for the entire calibration interval. Thus, a temperature range specification of 23°C ±1°C presumes very strict long-term control of the operating environment. Such a temperature range would not be representative of normal operation for a process calibrator.

Outside the specified range, a temperature coefficient (TC) is used to describe the degradation of the accuracy specification. The TC represents an error component that must be added to a calibrator's specification if it is being used outside of its nominal temperature range. A temperature coefficient graph is shown in figure 2.

In figure 2, as an example, the uncertainty as a function of temperature at full scale on the 11 V DC range of a calibrator is shown. The dashed line shows the specified accuracy for a 23°C ±5°C temperature range. Within the span of the dashed line, the accuracy is within the specification of 0.030% of full scale. This is in line with a specification of "0.025% of reading + 0.005% of full scale when used at 23°C ±5°C," which applies for a range of 18 to 28°C. Beyond this range, the instrument's performance degrades as shown by the solid line. TC usually will take the form:

TC = X %/°C

where X is the amount the performance degrades per degree of change beyond the base range specification. To calculate the accuracy at temperatures outside of the given specification, the temperature modifier, Tmod, is needed. The formula is:

Tmod = |TC x (t - Range Limit)|

where t is the proposed operating temperature and Range Limit is the limit of the specified temperature range that t is beyond, so t - Range Limit is the number of degrees outside the range.

If one wishes to use a calibrator in an ambient temperature outside of its specified range, the effects of TC must be added to the baseline accuracy specification when calculating the total accuracy. Tmod is used to calculate the total specification using the general formula:

Total Spec = (Basic Accuracy at a Specific Temperature Range) + Tmod

For example, suppose you have a calibrator that has a rated accuracy of 0.030% at 23°C ±5°C. Its TC is 0.0025%/°C. To calculate the accuracy of the calibrator for operation at 90°F (32°C):

t = 32°C

Range Limit = 23 + 5 = 28°C

Tmod = 0.0025%/°C x |32 - 28|°C = 0.01%

Total Spec = 0.030% + 0.010% = 0.040%
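The preceding calculation is easy to capture in a short routine. The following Python sketch reproduces the worked example (the function names and the default 18 to 28°C range are illustrative assumptions):

    def t_mod(tc_pct_per_degc, t_degc, range_low=18.0, range_high=28.0):
        """Temperature modifier (%) for operation outside the specified range."""
        if range_low <= t_degc <= range_high:
            return 0.0                  # inside the specified range, no penalty
        limit = range_high if t_degc > range_high else range_low
        return abs(tc_pct_per_degc * (t_degc - limit))

    def total_spec(base_pct, tc_pct_per_degc, t_degc, range_low=18.0, range_high=28.0):
        """Basic accuracy at the specified temperature range plus Tmod."""
        return base_pct + t_mod(tc_pct_per_degc, t_degc, range_low, range_high)

    # The worked example: 0.030% basic accuracy, TC = 0.0025%/°C, used at 32°C:
    print(total_spec(0.030, 0.0025, 32.0))   # -> approximately 0.040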



Table 1. Specifications must be conservative to ensure the calibrator is in tolerance at the end of its calibration interval.

As can be seen, the specification may change dramatically when the effects of performance due to temperature are considered.

Knowing how to calculate Tmod will be necessary when comparing two instruments that are specified for different temperature ranges. To truly compare the two calibrators, one needs to put them in the same terms (23°C ±5°C) using the preceding calculation.
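Continuing the sketch above with invented figures: suppose calibrator A is specified as 0.020% over a tightly controlled 23°C ±1°C, calibrator B as 0.030% over 23°C ±5°C, and A's TC is 0.0025%/°C. Restating A at the edge of the wider window puts the two in the same terms:

    # Calibrator A restated at 28°C, the edge of B's 23°C ±5°C window:
    spec_a = total_spec(0.020, 0.0025, 28.0, range_low=22.0, range_high=24.0)
    print(spec_a)   # -> approximately 0.030: no real advantage over calibrator B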

Most modern calibrators are specified to operate in wider temperature ranges because calibration instruments are no longer used only in the closely controlled laboratory. Calibration at the process plant demands greater temperature flexibility.

Allowance for Traceability to Standards. Uncertainty specifications also must be evaluated as relative or total. Relative uncertainty does not include the additional uncertainty of the reference standards used to calibrate the instrument. For example, when a calibrator's uncertainty is specified as relative to calibration standards, the figure covers only the uncertainty in the calibrator itself and is an incomplete statement of the instrument's total uncertainty. Total uncertainty includes all uncertainties in the traceability chain: the relative uncertainty of the unit plus the uncertainty of the equipment used to calibrate it.
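A brief sketch of that bookkeeping follows. The figures are invented, and the combination method is an assumption: straight addition is the conservative worst case, while root-sum-square combination is a common statistical alternative.

    import math

    relative_pct = 0.020    # hypothetical relative spec of the calibrator
    standards_pct = 0.008   # hypothetical uncertainty of the calibrating standards

    worst_case_total = relative_pct + standards_pct        # conservative linear sum
    rss_total = math.hypot(relative_pct, standards_pct)    # root-sum-square
    print(worst_case_total, rss_total)   # -> 0.028 and roughly 0.0215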

Confidence Level. A critical question about a calibrator's performance is what percentage of calibrators of that model will be out of calibration at the end of the calibration interval. Specifications must be conservative to ensure the calibrator is in tolerance - with a high degree of confidence - at the end of its calibration interval.



Figure 3. Identical calibrator performance can yield different specifications, depending on how aggressive the manufacturer chooses to be.

For example, say that vendors X and Y offer calibrators. Vendor X's specifications state that its calibrator can supply 10 V with an accuracy of 0.019%, and vendor Y's specification is 0.025% accuracy for the 10 V output. Neither of the calibrators' data sheets supplies a confidence level for the specifications or states how the accuracy is distributed.



When questioned, the vendors will state that their specifications are based on a normal distribution of accuracy and have the following confidence levels. Their responses are tabulated in table 1.

In this example, the actual performance of the calibrators is identical (figure 3). Vendor X, choosing a confidence level of 95%, is willing to risk 5% of its calibrators being found out of spec at the end of the stated time interval, and states a spec of 0.019%. The shaded and solid areas under the normal distribution curve represent the fraction of the calibrator population at risk. Vendor Y, choosing a confidence level of 99%, is willing to risk only 1% of its calibrators being found out of spec, and states a spec of 0.025%. The solid areas under the curve represent the fraction of the calibrator population at risk. Identical calibrator performance thus can yield different specifications, depending on how aggressive the manufacturer chooses to be.
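The table 1 figures can be checked with a short sketch. Assuming, as both vendors state, a normal distribution of accuracy (the function below uses only the Python standard library, and its name is illustrative), the one-sigma performance implied by each spec is essentially the same:

    from statistics import NormalDist

    def implied_sigma(spec_pct, confidence):
        """One-sigma spread implied by a spec quoted at a two-sided confidence level."""
        k = NormalDist().inv_cdf(0.5 + confidence / 2)   # coverage factor, ~1.96 at 95%
        return spec_pct / k

    print(implied_sigma(0.019, 0.95))   # vendor X -> roughly 0.0097
    print(implied_sigma(0.025, 0.99))   # vendor Y -> roughly 0.0097: same performance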

Before making a purchase, it is critical to understand a vendor's philosophy with respect to confidence level and to ask the vendor for clarification when there is doubt.

Accuracy specifications are an important part of determining whether or not a particular calibrator will satisfy a need. There are, however, many other factors that determine which calibrator is best suited for an application, including the workload, support standards, level of manufacturer support and reliability.


