PRECISION MEASURING INSTRUMENTS
All modern manufacturing requires the use of gauges. Without gauges, it would be impossible to order a part for a sewing machine, a bicycle, an automobile, or any other machine and be sure it would fit. Gauges make it possible for a technician to know when the piece he has made is right, and in-line gauges prevent parts that are not right from going to the assembly line and to the buyer.

The History of Gauging

Gauging (precision measurement) had its origins when humans began to develop products, goods, and services to meet life's needs. It was soon recognized that established rules must be set to ensure fit and function of products. That is to say, some type of dimensional control would have to be established to allow component parts to fit correctly during assembly and function properly when installed and subsequently tested. In addition to the concept of fit and function was the basic idea of interchangeability of parts and components. It became clear that standards needed to be established to ensure proper duplication of components.

The need for ensuring fit, function, and interchangeability is apparent when envisioning a relatively simple component such as a valve. A valve manufacturer who performs practically all processing and assembly of parts of a valve still relies on several other vendors for raw materials and certain parts. Frequently, some of the processing steps shown for the valve manufacturer are subcontracted to additional vendors. Dimensional control is absolutely necessary if all the parts are to fit at assembly and function properly at test and after installation.

Gauges may be simple or very complicated, but the purpose of gauges and how they are used is simple and easily understood once the principles involved are made clear. Good gauges are expensive to make, but they save their cost many times over when they are used properly. By preventing parts that will not fit from reaching the assemblers, they save much time and make it unnecessary for the assemblers to rework individual parts to force a fit. Until recently, the proper use of gauges made it easy for the workman to turn out work that was as accurate as required. Modern industry, however, is increasingly confronted with demands by design engineers to produce to ever-tighter tolerances. This perceived need is related to customer requirements for improved product quality, reliability, and durability. It is further fostered by overseas competition, in particular the Japanese, who are achieving such minimal variation about the true value that the variation is difficult to detect with older gauging techniques.

The term precision measurement is applied to the measurement field beyond the scope of line-graduated, non-precision measuring instruments, such as the rule and scale. It refers to the art of reproducing and controlling dimensions expressed in thousandths of an inch or millimeter.

This illustration of interfacing dimensions and the concept of interchangeability shows that gauging is absolutely necessary to prevent hundreds of thousands of dollars in repair or replacement costs.

Development of Gauges

Early in the 20th century, product tolerances for metal cutting were generally of the order of 0.005 to 0.010 inches, or about 0.10 to 0.25 millimeters. With such tolerances, the fixed-limit gauges, despite their error of "feel" and despite the purely "good or bad" information they provided, were an adequate, inexpensive, and swift means for product inspection. Consequently, the majority of gauges were of the fixed-limit type. While variable gauges were available (i.e., dial gauges, verniers, etc.), the fixed-limit gauges dominated gauge usage and the gauge budgets. Still more precise measurement forms were also available, but these were usually carried out in the precision laboratories rather than on the shop floor.

Within several decades, the usual tolerances for metal cutting had been reduced by an order of magnitude or more, making the fixed-limit gauge largely obsolete due to the high error of "feel" in relation to the new level of tolerances and the inadequacy of "good or bad" information for purposes of process control.

New Technological Principles

The early methods for measuring length all employed mechanical principles. To measure the numerous special configurations, e.g., inside diameters, depths, tapers, etc., many special tools were developed: surface plates, scales, verniers, micrometers, dial mechanisms, amplification linkages, and gauge blocks. These were brought to higher and higher levels of mechanical precision. Finally, the economics of continuing to improve precision through extension of mechanical principles reached its limit, and it became necessary to use other principles; mainly electronic, pneumatic, and optical.

Electronic Measurement

The usual form of electronic measurement is built around a balanced Wheatstone Bridge circuit. The gauging head or probe is used as a comparator, first resting on a known buildup of gauge blocks and then on the specimen. The difference in vertical positions of this gauging head actuates a linear variable differential transformer which, in turn, unbalances the Wheatstone Bridge. The amount of unbalance can be amplified by various orders of magnitude and then read on the scale, which is calibrated in units of length. Amplifications can exceed 5,000X.
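
As a rough illustration of the comparator arrangement just described, the sketch below (in Python, with invented numbers) multiplies the sensed difference between the gauge-block buildup and the specimen by an assumed amplification factor; it is not the circuit math of any particular instrument.

```python
# Minimal sketch (not a specific instrument's formula): an electronic comparator
# senses the difference between a gauge-block reference and the specimen, then
# amplifies that difference for display on a scale calibrated in length units.

def comparator_reading(specimen_height_in, reference_height_in, amplification=5000):
    """Return the scale deflection, in inches, for a given amplification factor."""
    deviation = specimen_height_in - reference_height_in   # what the gauging head senses
    return deviation * amplification                        # what appears on the scale

# Example: a specimen 0.0002 in taller than the gauge-block buildup,
# amplified 5,000X, deflects the scale by roughly 1 inch.
print(comparator_reading(1.0002, 1.0000))   # -> ~1.0
```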

Pneumatic Measurement

The pneumatic family of instruments operates on the principle that the volume of air flowing through a gap varies with the size of the gap. In some configurations, this variation is virtually linear and, hence, permits ready calibration. The amplification factor of these systems is surprisingly high, extending comfortably to over 10,000X; i.e., a product variation of 0.001 inches (0.0254 millimeters) would show up on the scale as 10 inches (or 254 millimeters).
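
The amplification arithmetic in this paragraph can be restated in a few lines; the sketch below simply assumes the linear response described above and is not tied to any particular pneumatic gauge.

```python
# Minimal sketch of the amplification arithmetic described above (assumed linear
# response): a pneumatic gauge turns a small change in the gap into a large,
# readable deflection on its scale.

def scale_deflection(gap_variation, amplification=10_000):
    """Scale movement produced by a given product variation (same length units)."""
    return gap_variation * amplification

print(scale_deflection(0.001))    # 0.001 in of product variation -> 10 in of scale travel
print(scale_deflection(0.0254))   # 0.0254 mm of product variation -> ~254 mm of scale travel
```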

Optical Measurement

There are various ways of using optical magnification for measuring length. The most precise makes use of the well-known phenomenon of interference fringes, resulting when waves of light in the visible spectrum are alternately in phase and out of phase. A count of these fringes becomes a count of wavelengths of light. These distances are tiny. There are over a million wavelengths of visible light in each meter. The resulting precision permits magnifications of over a million in measuring with interference fringes.
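
A hedged sketch of fringe counting follows; it adopts the article's simplification that one fringe corresponds to one wavelength, and the helium-neon wavelength is assumed purely for illustration.

```python
# Minimal sketch of fringe counting, using the article's simplification that one
# fringe corresponds to one wavelength of the light used (many interferometer
# layouts actually register a fringe per half wavelength).

HELIUM_NEON_WAVELENGTH_M = 632.8e-9   # a common laser wavelength, in meters

def length_from_fringes(fringe_count, wavelength_m=HELIUM_NEON_WAVELENGTH_M):
    """Convert a fringe count into a length in meters."""
    return fringe_count * wavelength_m

# Well over a million fringes of visible light span one meter.
print(1.0 / HELIUM_NEON_WAVELENGTH_M)   # fringes per meter (~1.58 million)
print(length_from_fringes(100))         # 100 fringes ~= 63.3 micrometers
```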

Gauging Fundamentals

Fundamental concepts and terms are used within any discipline. In gauging, the primary concepts are standards, accuracy, and precision. The secondary concepts are readability, sensitivity, and linearity. Standards were previously introduced. Before introducing the other five terms, the term measurement is defined and illustrated and the fundamental design of measuring is reviewed.

Measurement

Measurement is "the act or process of ascertaining the dimensions of something by comparing its dimensions to an accepted standard using devices designed for that type of measurement." This means that, in order for measurement to occur, the measuring device used to gauge the feature must be compared to a standard. In this manner, the measurement result has validity because the accuracy of the measuring device has been determined.

It is obvious that the ability to measure accurately and precisely must improve at a rate at least commensurate with the ability to manufacture with greater precision. To comply with government and industry standards, most gauging equipment accuracy should be traceable to the National Bureau of Standards.

Fundamental Design of Measuring Devices

All measuring instruments or devices have three common design features: sensor, transducer, and readout. Depending on the specific measuring device, the same part may serve in multiple capacities or individual parts may exist for each of these three components.

Sensors

Sensors are the elements of instruments that detect the units to be measured. The contacts of a micrometer are considered the sensor in that the spacing of the contacts detects the length units being measured. Tracers of surface roughness measuring equipment are sensors of surface topography in that these tracers follow the surface contour.

Transducers

A transducer is a device that responds to a given parameter to be measured by producing a signal that is related to one or more variables of that parameter. Such devices vary from the delicate and reliable transducers of space vehicle telemetry systems to the simple air jets for measuring cylinder bores.

Readout

Readout is the indication of the measured feature in the desired units. Readout may be by direct indicating, digital, or recording.

Figure 2 shows the sensor, transducer, and readout of a measuring device. Note the following features of this measuring device (a brief sketch of how these elements work together follows the list):

  • The anvil is the sensor
  • The internal lead screw functions as the transducer
  • The thimble functions as the readout
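
The sketch below suggests how these three elements cooperate in a typical inch micrometer; the 40-threads-per-inch lead screw and 25 thimble divisions are assumed values for a common design, not taken from Figure 2.

```python
# Minimal sketch of how the sensor, transducer, and readout cooperate in a
# typical inch micrometer (assumed 40-threads-per-inch lead screw and 25 thimble
# divisions): the anvil/spindle sense the part, the lead screw converts rotation
# to travel, and the sleeve and thimble graduations provide the readout.

def micrometer_reading(sleeve_divisions, thimble_divisions):
    """Compose a reading from what the user sees on the sleeve and thimble.

    sleeve_divisions  -- number of 0.025 in marks exposed on the sleeve
    thimble_divisions -- thimble graduation aligned with the index line (0-24)
    """
    lead = 1 / 40                  # spindle travel per revolution: 0.025 in
    per_division = lead / 25       # one thimble graduation: 0.001 in
    return sleeve_divisions * lead + thimble_divisions * per_division

# Sleeve shows 7 divisions (0.175 in) and the thimble reads 12 (0.012 in).
print(f"{micrometer_reading(7, 12):.3f} in")   # -> 0.187 in
```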

Units

People use units throughout their lives without necessarily realizing it. For example, if a person wished to meet a friend at a certain place and a given time, it would be confusing if units were not used in the conversation. One might say, "I will see you in 2 at the restaurant, which is 25 up the road." Although one may guess what the units might be, there would still be some doubt. For example, is the meeting to be held in 2 minutes or 2 hours? Is the restaurant 25 feet or 25 miles up the road? These clarifying terms are called units.

A unit is "a determined quantity that has been adopted as a standard of measurement." It is used to inform us of how much of a quantity we are working with. Examples of units are feet, inches, centimeters, pounds, kilograms, gallons, grams, and seconds. In general, a unit is fixed by definition and is independent of physical conditions such as temperature and pressure.

Primary Gauging Concepts

In gauging, the primary concepts are standards, accuracy, and precision.

Standards

A standard is "the unit of reference by which a given parameter is measured." For example, in the SI system, length is measured in meters, mass in kilograms, and time in seconds. The purpose of standards is to provide a common reference by which all basic physical parameters, such as length, mass, or time, can be compared.

The US National Bureau of Standards was established by an act of Congress in 1901 to serve as a national scientific laboratory in the physical sciences and to provide fundamental measurement standards for science and industry. From the British System of Weights and Measures, the US developed a system of weights and measures that contains both equalities and inequalities with the British system. In both systems, length measurements are identical, e.g., 12 inches = 1 foot, 3 feet = 1 yard, and 1,760 yards = 1 international mile. However, differences exist with respect to other measures such as dry volume (bushel) and liquid volume (gallon). The US National Bureau of Standards establishes the definition of units within the US System of Weights and Measures and maintains conversion factors to convert any US units to their corresponding British and SI units.
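
The length equalities quoted above chain together directly; the short sketch below uses only those defined factors.

```python
# Minimal sketch of the length equalities cited above, which are identical in
# the US and British systems; the constants below are exact by definition.

INCHES_PER_FOOT = 12
FEET_PER_YARD = 3
YARDS_PER_MILE = 1_760

def miles_to_inches(miles):
    """Chain the defined equalities to convert miles to inches."""
    return miles * YARDS_PER_MILE * FEET_PER_YARD * INCHES_PER_FOOT

print(miles_to_inches(1))   # -> 63360 inches in one international mile
```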

Hierarchy of Standards of Units

One does not compare all measurements with the primary standard. It is totally impractical to compare all length measurements with the distance traveled by light in a vacuum during 1/299,792,458 of a second. It is much more practical for each organization involved in measurements to possess or have ready access to other suitable standards. Accordingly, a hierarchy of standards has evolved as shown in Figure 3 that includes four divisions.

Primary Reference Standards are those maintained by the US National Bureau of Standards. These standards consist of a duplicate copy of the International Meter plus measuring systems that are responsive to the definitions of other standard units. These standards serve as the common reference for measurements within the United States for both the SI and US System of Weights and Measures and, hence, are the primary reference standards.

Transfer Standards are those maintained by industry for the sole purpose of "transferring" accuracy of measurement to the next lower level of standards in the hierarchy. The National Bureau of Standards compares transfer standards with the defined unit of measure. Such a "calibrated" transfer standard may then be used to determine the accuracy of other transfer standards of the same fundamental unit of measurement. Economics governs the number of transfer standards in existence. It should be stressed that transfer standards are ordinarily used for calibration and not for actually making measurements of product.

Working Standards are those possessed by or readily available to every organization that makes measurements of product. The working standards are compared to the transfer standard to determine their accuracy. Working standards can be used to make measurements when a high degree of accuracy is required.

Measuring Devices are used to take measurements. The accuracy of these devices is known by their comparison with the working standard.

Thus, the total hierarchy offers the provision for determining the ability of a measuring device to take a measurement in units embodied by the primary reference standard (e.g., national standard). The hierarchy permits billions of measuring devices to be compared economically with the national standard.
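
One way to picture traceability is as a calibration chain whose uncertainty grows at each level; the sketch below uses invented uncertainty figures solely to illustrate the idea, not data from any standards body.

```python
# Minimal illustrative sketch (not a formal uncertainty analysis): traceability
# means each level of the hierarchy is calibrated against the level above it,
# so a shop-floor device's error budget can be followed back to the primary
# reference standard. The uncertainty figures below are invented.

calibration_chain = [
    ("primary reference standard", 0.000001),   # assumed uncertainty, inches
    ("transfer standard",          0.000010),
    ("working standard",           0.000100),
    ("measuring device",           0.001000),
]

# Walk the chain from the primary standard down to the measuring device,
# accumulating a simple worst-case (additive) uncertainty at each step.
accumulated = 0.0
for level, uncertainty in calibration_chain:
    accumulated += uncertainty
    print(f"{level:28s} cumulative worst-case uncertainty: {accumulated:.6f} in")
```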

Standards by Adoption

Written standards in use in the United States are developed by government agencies, professional societies, or associations representing using industries. Those developed by professional societies and user associations are frequently adopted by the American National Standards Institute (ANSI) as a national standard. These standards may also be made mandatory for use by governmental statute or regulation. In any event, when two or more using parties agree to comply with written criteria, a standard has been adopted.

Some examples of written standards pertaining to measurement are as follows:

  • Military Standard 105D, Sampling Procedures and Tables for Inspection by Attributes - This standard defines criteria and procedures for use of statistical sampling techniques.
  • Department of Commerce Handbook H-28, Screw Thread Standards for Federal Services - This standard defines classes of threaded fasteners and their precision.
  • Department of Commerce GGG-G-15, Gauge Blocks and Their Accessories - This standard defines classes of gauge blocks and their accuracies.
  • American National Standards Institute (ANSI) Y14.5, Dimensioning and Tolerancing - This standard defines methods of designating form features on drawings and the interpretation of such designations.

Accuracy

Accuracy is defined as "the agreement of a measurement with the true value of the measurement quantity." Accuracy is the agreement of the recorded value with the value that would have been obtained had the dimension been directly compared to the primary reference standard. Recall that the primary reference standard is the most accurate technique for measurement of a given parameter. Therefore, in gauging, one is attempting to reference the accuracy of a given measurement back to this primary standard or "most accurate" measurement.

Considering only the accuracy of the measuring device and intermediate standards, the accuracy could be determined if the accuracy of each intermediate standard were known. However, this would not account for human accuracy in using and reading the measuring device nor other factors such as temperature effects. Sources of error affecting accuracy are described later in this chapter.

Accuracy of measurement is essential to the ability to ensure fit, function, and interchangeability. Two principles for achieving accuracy are:

  • The more alike two things are, the more accurately they can be compared.
  • The operations performed on the standard and the unknown must be as identical as feasible.

The first principle involving the accuracy of measurements could be demonstrated by comparing a meterstick to a yardstick of the same accuracy. Measurements involving these two devices require the use of conversion factors that introduce another source of error. It would be easier to compare one meterstick to another meterstick or a yardstick to another yardstick.

The second principle involving the accuracy of measurement advises that, to achieve greater accuracy in a measurement, the same technique and the same environment should be employed when comparing a standard and an unknown.

Precision

Precision is the repeatability of the measuring process. If the width of a vee block were measured ten times with a wood rule and then again with a steel rule, different values for the measured dimension would probably be recorded. While the accuracy of each device contributes to the results in that case, even the same individual using the same measuring device to repeatedly measure the same dimension is likely to produce a scatter of results. Precision describes how closely identical values are obtained when the same measurement is repeated at various intervals or duplicated with different instruments.

Inherent features of the measuring device and the measurement technique and materials are the main factors affecting precision. Each measuring device has its own precision. The precision of the measurement technique (procedure) is affected by human factors, the use of different devices or materials, and environmental changes.
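
The distinction between accuracy (agreement with the true value) and precision (scatter on repetition) can be made concrete with a few repeated readings; the reference value and readings below are invented examples.

```python
# Minimal sketch contrasting accuracy and precision for a set of repeated
# measurements; the vee block width and readings below are invented.

from statistics import mean, stdev

true_width = 2.0000                                     # assumed reference value, inches
readings = [2.0003, 2.0001, 2.0004, 2.0002, 2.0003,
            2.0001, 2.0004, 2.0002, 2.0003, 2.0002]     # ten repeats, same device, same user

bias = mean(readings) - true_width        # accuracy: agreement with the true value
scatter = stdev(readings)                 # precision: repeatability of the process

print(f"average reading:       {mean(readings):.5f} in")
print(f"bias (accuracy error): {bias:+.5f} in")
print(f"scatter (precision):   {scatter:.5f} in")
```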

Repeatability

Repeatability is a measure of the ability of an instrument to produce the same indication (or measured value) when sequentially sensing the same quantity under similar measurement conditions. The specification of similar measurement conditions must be such that all known systematic influences on the measurement process (environmental, procedural, operator, etc.) are controlled, independently measured, or explicitly included as part of the repeatability measure for a given measuring instrument.

Reproducibility

Reproducibility is the variation in measurement averages when more than one person measures the same dimension or characteristic using the same measuring instrument.

Stability

Stability is the variation in the measurement averages when values from the same measuring instrument are recorded over a specified time interval.
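
A minimal sketch of the reproducibility idea follows; the readings are invented, and stability would be computed the same way using averages recorded at different times rather than by different operators.

```python
# Minimal sketch of reproducibility: two people measure the same dimension with
# the same instrument, and the difference of their averages is the
# reproducibility component. All readings are invented for illustration.

from statistics import mean

operator_a = [0.5002, 0.5003, 0.5001, 0.5002, 0.5003]   # inches
operator_b = [0.5005, 0.5006, 0.5004, 0.5005, 0.5006]

reproducibility = abs(mean(operator_a) - mean(operator_b))
print(f"difference between operator averages: {reproducibility:.5f} in")

# Stability would compare averages of readings taken by the same operator and
# instrument at different times instead of by different operators.
```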

Secondary Gauging Concepts

The secondary gauging concepts are just as important as the primary concepts. However, these concepts are more directly related to a measuring instrument's ability to measure to a standard with accuracy and precision. These secondary concepts are readability, sensitivity, and linearity.

Readability

Readability is the susceptibility of a measuring device to having its indications (readout) converted to a meaningful number. For example, a worn-out ruler whose graduations are difficult to read could not be used effectively in making an accurate measurement. Similarly, a ruler that is graduated in eighths of an inch may be difficult to use when trying to read to the nearest one-tenth of an inch. Likewise, attempts to graduate a ruler in extremely fine increments, e.g., 0.010" or less, make the ruler unreadable. The term readability is sometimes used interchangeably with discrimination. However, discrimination means the increment of the least significant unit on a measuring device. One example is a scale graduated to the nearest one-tenth of an inch. This scale could not be used to make measurements to the nearest one-hundredth of an inch because it would not have sufficient discrimination. Readability refers to the ability of the user to read to the smallest unit on the measuring device using specified inspection procedures. Line-graduated measuring devices that have very fine discrimination may not be very readable.
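
The sketch below illustrates discrimination by rounding a value to the nearest graduation a device can resolve; the quantize helper and the example value are hypothetical.

```python
# Minimal sketch of discrimination: a reading cannot be finer than the smallest
# graduation on the device, so a scale marked in tenths of an inch cannot
# legitimately report hundredths.

def quantize(value, discrimination):
    """Round a value to the nearest graduation the device can actually resolve."""
    return round(value / discrimination) * discrimination

true_length = 3.4721   # invented "true" length, inches
print(f"{quantize(true_length, 0.1):.4f}")    # tenth-inch scale      -> 3.5000
print(f"{quantize(true_length, 0.01):.4f}")   # hundredth-inch device -> 3.4700
print(f"{quantize(true_length, 1/64):.4f}")   # 1/64 in scale         -> 3.4688
```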

Four factors affecting the readability of a measuring device are:

  1. The skill of the user
  2. The clarity of the readout on the instrument
  3. The type of readout, e.g., scale, numeric display, digital
  4. The level of lighting

The lower scale in Figure 4 discriminates to 1/64 of an inch while the upper scale can only reliably discriminate to 1/16 of an inch. The readability of the upper scale is greater than that of the lower scale, i.e., it is easier to see the divisions.

Readability affects the selection of the measuring device to be used. The tolerance of the dimension to be measured governs such selection.

Tolerance

Tolerance is defined as "the range or limit of measurement values that are acceptable to the specified standard." For example, if the tolerance of the feature of a part is plus or minus 0.125 inch, an ordinary scale or steel rule is suitable. If that same feature has a tolerance of plus or minus 0.005 inch, a vernier with a readout in increments of 0.001 inch should be considered.

Tolerance-to-Readability Ratio

Tolerance-to-readability ratio determines the selection of a measuring device for a specific application. There are two rules with respect to tolerance-to-readability ratio. One is to select instruments having at least a 10 to 1 ratio and the other is to select instruments having at least a 4 to 1 ratio.

While the 10 to 1 rule is more conservative, it frequently becomes impractical when calibrating instruments. When the 10 to 1 rule is applied to the hierarchy of standards in the calibration process, the "state-of-the-art" of inspection capability is often exceeded; hence, the 4 to 1 rule is more practical. For fixed-limit tolerances, readability becomes a factor as the limit is approached. Examples of both rules for selection of measuring devices are given in Table 1.
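
A minimal sketch of applying the two rules follows; the candidate instruments, their readabilities, and the tolerance band are invented examples, not the entries of Table 1.

```python
# Minimal sketch of the selection rules above: divide the total tolerance band
# by a candidate instrument's readability and check the result against the
# chosen ratio. The instrument list is an invented example.

candidates = {
    "steel rule (1/64 in)":   1 / 64,
    "vernier (0.001 in)":     0.001,
    "micrometer (0.0001 in)": 0.0001,
}

def acceptable(tolerance_band, readability, required_ratio=4):
    """Apply the 10 to 1 or 4 to 1 rule (pass required_ratio=10 or 4)."""
    return tolerance_band / readability >= required_ratio

tolerance_band = 0.010   # e.g., a +/-0.005 in feature: 0.010 in total band
for name, readability in candidates.items():
    ok_10 = acceptable(tolerance_band, readability, required_ratio=10)
    ok_4 = acceptable(tolerance_band, readability, required_ratio=4)
    print(f"{name:24s} 10:1 rule: {ok_10}   4:1 rule: {ok_4}")
```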