

Definition 1: Metrology is the science of measurement.

Definition 2: Metrology is the science of weights and measures used to determine the conformance of an item to technical requirements. Metrology also includes the development of standards and systems for absolute and relative measurements.


Definition 1: Calibration is the comparison of instrument performance to a standard of known accuracy.

Definition 2: Calibration is the comparison of Measuring and Test Equipment (M&TE) or Measurement Standards (MS) -- which have specified tolerances but unknown accuracy -- to a Measurement Standard of known and greater accuracy, in order to detect, correlate, report, or minimize by adjustment or correction factor all deviations from specified tolerance limits.


This definition of a Metrologist is taken from the "Dictionary of Occupational Titles", an official guidebook produced by the Employment and Training Administration of the US Department of Labor.

"Develops and evaluates calibration systems that measure characteristics of objects, substances, or phenomena, such as length, mass, time, temperature, electric current, luminous intensity and derived units of physical or chemical measure. Identifies magnitude of error sources contributing to uncertainty of results to determine reliability of measurement process in quantitative terms. Redesigns or adjusts measurement capability to minimize errors. Develops calibration methods and techniques based on principles of measurement science, technical analysis of measurement problems and accuracy and precision requirements. Directs engineering, quality and laboratory personnel in design, manufacture, evaluation and calibration of measurement standards, instruments and test systems to insure selection of approved instrumentation. Advises others on methods of resolving measurement problems and exchanges information with other Metrologist personnel through participation in government and industrial standardization committees and professional societies."


As components age and equipment undergoes changes in temperature or sustains mechanical stress, critical performance gradually degrades. This is called drift. When this happens, your test results become unreliable and both design and production quality suffer. While drift cannot be eliminated, it can be detected and contained through the process of calibration. Properly calibrated equipment provides confidence that your products and services meet their specifications.

• Increases production yields
• Optimizes resources
• Assures consistency
• Ensures measurements (and perhaps products) are compatible with those made elsewhere

And, by making sure that your measurements are based on international standards, you promote customer acceptance of your products around the world.


The simple concept behind calibration is that measuring equipment should be tested against a standard of higher accuracy. For any parameter/range we should be able to illustrate this type of hierarchical relationship:

National Standard .................... Accurate to 0.002%
Calibration Laboratory ........................... 0.01%
Company "Master" Item ............................ 0.07%
Company Production Equipment ...................... 1.0%
Produced Product ................................. 10.0%

Of course, these calibrations need to be done on a planned, periodic basis, with evidence of the comparison results maintained. This record must identify the specific standards used (which must be within their assigned calibration intervals) and provide some means of knowing the method used and other test conditions. By examining these records, it should be possible to demonstrate an unbroken chain of comparisons ending at the agency responsible for maintaining and developing a country's measurement standards (in the United States, NIST). This demonstrable linkage to national standards, with known accuracy, is what "traceability" means.

In fact, it doesn't stop there because these laboratories routinely undertake international comparisons which help to establish worldwide consensus on the accepted value of the fundamental measurement units - without which, there would be little confidence in, for instance, successfully mating a 10 mm screw manufactured in one country with a 10 mm nut produced in another!
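As a rough sketch (using the illustrative accuracy figures from the hierarchy above), the ratio between adjacent tiers shows how each level is checked against one of known and greater accuracy:

```python
# Accuracy (% of reading) at each tier of the traceability chain,
# using the illustrative figures from the hierarchy above.
chain = [
    ("National Standard", 0.002),
    ("Calibration Laboratory", 0.01),
    ('Company "Master" Item', 0.07),
    ("Company Production Equipment", 1.0),
    ("Produced Product", 10.0),
]

# Each tier is calibrated against the (more accurate) tier above it;
# the ratio between adjacent tiers is the accuracy ratio at that step.
for (upper, u_acc), (lower, l_acc) in zip(chain, chain[1:]):
    ratio = l_acc / u_acc
    print(f"{lower} vs {upper}: {ratio:.1f}:1")
```

Note that every step in this particular chain keeps a comfortable margin of accuracy over the level below it, which is what makes the overall linkage meaningful.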

The calibration interval is the period assigned to an item of equipment between calibrations -- examples could be 3 months or perhaps 2 years. An alternative way of expressing this is the calibration cycle, usually stated as the number of calibrations required per year. Equipment used in any metrological situation must have known accuracy; that is, a specification assigned by the manufacturer or by the user. Since the performance of pretty much everything on Earth degrades with time, or use (or potential abuse), the expected accuracy must relate to a given period.

Despite what the salesman might tell you, no measurement can be guaranteed to be perfect! An uncertainty is a figure of merit associated with the actual measured value: the boundary limits within which the "true" value is believed to lie. Contributors to this potential for inaccuracy include the performance of the equipment used to make the measurement, the test process or technique itself, and environmental effects. Additional imprecision may result from the behavior of the phenomenon or item being measured. A skilled Metrologist will assess and combine these various components in an uncertainty budget. To prove that a product complies with its specification (or doesn't), the uncertainty must be less than that specification.
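A common way to combine independent components of an uncertainty budget is root-sum-of-squares. The sketch below uses hypothetical component values and a coverage factor of k=2; the real budget for any given measurement would list its own contributors and magnitudes:

```python
import math

# Hypothetical uncertainty budget: independent standard-uncertainty
# components (in the same units as the measurement), combined by
# root-sum-of-squares.
budget = {
    "reference standard": 0.010,
    "measurement technique": 0.005,
    "environment (temperature)": 0.004,
    "unit-under-test behavior": 0.003,
}

combined = math.sqrt(sum(u**2 for u in budget.values()))
expanded = 2 * combined  # coverage factor k=2, roughly 95% confidence

print(f"combined standard uncertainty: {combined:.4f}")
print(f"expanded uncertainty (k=2):    {expanded:.4f}")
```

Notice that the largest component dominates the combined figure, which is why uncertainty-reduction effort is usually aimed at the biggest contributor first.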

It's generally considered good practice to use test equipment and techniques whose combined uncertainty is 3 to 10 times smaller than the specification of the unit under test (see the concept of traceability). This is the test accuracy ratio: TAR = Spec/Uncertainty.
Of course, the higher the TAR the better, but higher-performance test gear or extended test times for averaging, for instance, cost more, so the pursuit of an excessively high TAR is cautioned against.

A guardband, particularly employed when TARs are low, is a safety margin whose purpose is to tighten an acceptance (pass) limit when testing a specified product. The guardband limit might, simplistically, be set at a point equal to the specification minus the uncertainty, but it is often "tuned" to recreate the same confidence that would result from using a 4:1 TAR with the acceptance point set at the spec limits. Guardbands are also employed in manufacturing where routine testing may cover only a subset of the product's full (customer) spec in extent or environment, yet assurance of compliant shipments is desired.
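The simplistic rule above (acceptance limit equals specification minus uncertainty) can be sketched as follows; all the numbers are hypothetical:

```python
# Simplistic guardband: tighten the acceptance limit by the measurement
# uncertainty, so a "pass" implies the true value is within specification
# even in the worst case. All numbers here are hypothetical.
spec_limit = 10.0    # upper specification limit
uncertainty = 0.5    # expanded measurement uncertainty, same units

accept_limit = spec_limit - uncertainty  # guardbanded acceptance limit


def accept(measured):
    """Pass the unit only if its reading clears the guardbanded limit."""
    return measured <= accept_limit


print(accept(9.4))   # True  -- comfortably inside the guardband
print(accept(9.7))   # False -- inside spec, but too close to call
```

A reading of 9.7 would pass against the bare spec limit of 10.0, yet with 0.5 of uncertainty its true value could lie above 10.0 -- which is exactly the risk the guardband is designed to exclude.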


© 2009 Coastal Calibration Laboratories, Inc. All Rights Reserved