Accuracy and precision

Precision and accuracy are two criteria used to assess a measurement in metrology.

Precision is a criterion for the quality of a measurement procedure; it is therefore also referred to as the internal accuracy of a measurement.

  • It quantifies the precision of a method by repeating the measurement many times under the same circumstances with the same measuring instrument or measuring system and reducing the individual sets of results according to standardized algorithms of error and adjustment calculation. A very precise method delivers nearly identical sets of results for the same task each time. Precision makes no claim about how far the individual measured values lie from the true reference value, but it describes the stability of the instrument, including its readout, during the measurement.

Accuracy, by contrast, means two things:

  • An external accuracy: it is expressed in the scatter of the mean values of the measured values when they are measured repeatedly and are thereby subject to natural, uncontrollable variations.
  • An absolute accuracy, which is the degree of agreement between the indicated and the true value.

To delineate the terms accuracy and precision against each other, it should first be noted that in technical contexts high precision is a necessary but not sufficient condition for high accuracy. This relationship, which follows logically from the short definitions given above, is better captured by the outdated term "repeatability": precision is limited to the reproducibility aspect of accuracy. Technical operations carried out with high precision can therefore still be inaccurate, but conversely an imprecise technique cannot produce accurate results; see also the definition of precision according to DIN.


Terms

Precision

The meaning of the term precision is defined in DIN 55350-13 by reference to its equivalence in meaning with the characteristic formerly called repeatability.

The term "repeatability" is no longer supposed to be used in the field of standardization. Nevertheless, repeatability is often quantified as a quality criterion of measuring instruments and even of entire production machines, not only in older literature but also in newer texts, and is used to specify instrument errors.

External accuracy

The external accuracy is that share of the measurement error which is not part of the instrument error (the precision of the measuring device).

Absolute accuracy

In EN 60051, the accuracy of a measuring instrument is defined as

"Degree of correspondence between the displayed and the correct value. The accuracy [...] is determined by the limits of intrinsic error and the limits of variations due to influence quantities."

In this context, it is used to define the accuracy class.

Application

The requirements of independence (of the investigation results), definition (of the procedure), and specification (of the conditions) restrict the use of the term precision to meaningful measures. This results in two main areas of use of precision as a quality feature: measuring equipment and robotics.

Measuring equipment

Compared to the absolute accuracy of a measuring instrument, precision plays only a minor role in the initial assessment of the quality of a device. As long as a more accurate measurement method than the one whose quality is to be assessed is available for direct comparison, one will usually carry out a calibration to absolute values in order to establish the quality of the device. Calibrating means: comparing a measuring device with a better measurement standard (see: standard).

A measuring device can serve to calibrate another instrument if its absolute accuracy over the entire measuring range is at least an order of magnitude better than that of the device under test.

The most accurate devices of the current state of the art cannot themselves be calibrated, since the far more accurate measuring devices required for this do not exist. These instruments of the highest technically achievable accuracy therefore cannot be assigned any quantification of their greatest absolute measurement error.

In these cases, however, determining the precision still yields valuable information about the reliability of the results of such machines. If the same measurement task (with an established procedure and specified conditions) is carried out repeatedly (yielding independent investigation results), the mutually corresponding deviations between these independent results can be combined according to a predefined scheme. The result of this procedure is the precision of the new calibration machine.
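As a minimal sketch of such a reduction scheme, the repeated results can be combined into a single precision figure. The empirical standard deviation is used here as the combining rule (the exact algorithm is prescribed by the applicable standard), and the readings are invented example data:

```python
import statistics

def repeatability(readings):
    """Empirical standard deviation of repeated, independent
    measurements of the same quantity under identical conditions;
    a stand-in for the reduction scheme prescribed by the
    applicable standard."""
    return statistics.stdev(readings)

# Ten repeated readings of the same nominal position in mm
# (invented example data):
runs = [100.002, 99.998, 100.001, 100.000, 99.999,
        100.003, 99.997, 100.001, 100.000, 99.999]

print(f"repeatability: {repeatability(runs) * 1000:.2f} um")
# → repeatability: 1.83 um
```

The smaller this figure, the more stable the instrument's behaviour over repeated runs; it says nothing about how far the readings lie from the true value.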

For devices that can be calibrated, i.e. those with a less extreme level of precision, measuring the repeatability represents an opportunity for improvement: a manufacturer of such instruments can learn from it about systematic errors of its design and feed the findings back into development. Repeatability is mainly specified for rotary encoders, linear encoders, angle measuring devices, and other high-precision instruments.

Example calibration standard

For primary calibration devices, however, repeatability plays an essential role, since they are themselves calibration standards and therefore cannot be calibrated (or only in a few exceptional cases). To determine the repeatability, the measuring range is traversed repeatedly under conditions kept as identical as possible. The resulting measurement charts are then compared. To clearly delineate such "comparative measurements with themselves" from absolute calibrations, the term "repeatability", in the sense of precision, is essential especially in the area of measuring systems of the highest precision and highest absolute accuracy. The repeatability can be better than the absolute accuracy, and the absolute accuracy can be immeasurable at the time the quality is determined in the course of technical development. The difference arises primarily because, for well-functioning gauges, the individual measurement results are as a matter of principle scattered symmetrically around the true physical value, following a Gaussian distribution. In special cases, however, a systematic error of a measuring instrument can concentrate the absolute measurement error in some regions of the measuring range, while the remaining regions are measured with much higher accuracy. In the absence of such systematic errors, arising from the limits of technical feasibility, as is typical for instruments used for calibration, two consecutive measurement series deliver nearly identical diagrams, which nevertheless carry their typical absolute errors in the same regions of the range each time.
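The distinction can be illustrated with a small simulation (all numbers are hypothetical): a constant systematic instrument error shifts every reading, while Gaussian noise scatters them. The scatter (repeatability) stays small even though the absolute error remains large:

```python
import random
import statistics

random.seed(42)  # reproducible sketch

TRUE_VALUE = 50.000   # the true value (normally unknowable), mm
BIAS = 0.020          # hypothetical systematic instrument error, mm
NOISE = 0.002         # random scatter of a single reading, mm

readings = [TRUE_VALUE + BIAS + random.gauss(0.0, NOISE)
            for _ in range(1000)]

repeatability = statistics.stdev(readings)                     # precision
absolute_error = abs(statistics.mean(readings) - TRUE_VALUE)   # accuracy

# The scatter stays near NOISE while the mean error stays near BIAS:
# the simulated instrument is precise (repeatable) but not accurate.
```

Two consecutive series from such an instrument would look nearly identical, yet both would carry the same bias.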

If there is no possibility of checking a calibration device against a slightly better one to determine its measurement error, then the repeatability is initially the only clue for at least a rough estimate of how good the measuring device or the measurement method might be.

The relevant measurement standards treat the aspects of precision for measuring machines particularly clearly, as at the PTB: of great importance is an additional check of all investigation results of a calibration tool by comparison with the corresponding results of at least one non-identical precision machine for the same application.

Robotics

In industrial production, predominantly automated production plants are used; their value obviously depends on how often and in how quick a succession (independence) they can carry out one and the same production task as identically as possible under specified conditions (production hall, materials, shapes, ...). These production plants are equipped with many measuring instruments, particularly for lengths and angles.

The precision of the individual measuring instruments, which serve to bring information about the machining space into a form the CNC can process, can, with a successful design, be inherited by the entire system. To evaluate the overall result, the precision of a complete production plant is determined in exactly the same way as the precision of a calibration machine.

Precision example: linear axis of a machine tool

In the context of a linear axis of an automatic production plant, repeatability describes the reproducibility of a measured value for the position along this axis under identical conditions for determining that value.

Let the production plant be a machine with three linear axes X, Y, and Z. The precision of the X axis can be measured by comparing approached positions while the Y and Z axes are held fixed during the investigation. The framework conditions (temperature, lubrication state, ...) are then fixed, and the procedure (defined by specifications for the start of the test section, the measured travel, the direction of approach, the traverse speed, ...) is carried out several times in succession. The measurement results are finally combined according to DIN. Since this example is a positioning task, the result for the precision is a value with the physical unit of length, usually stated in micrometres owing to the accuracy demands of production.

Often, however, a relative specification is more meaningful. In this example it is obtained by dividing the calculated absolute repeatability by the measurement path traversed. Specifying this percentage precision permits, within certain limits, a comparison even of quite different instruments.
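A sketch of this relative figure, with `relative_repeatability` as a hypothetical helper name and invented numbers:

```python
def relative_repeatability(abs_repeatability, travel):
    """Percentage precision: the absolute repeatability divided by
    the measurement path traversed (hypothetical helper; both
    arguments in the same length unit)."""
    return 100.0 * abs_repeatability / travel

# A repeatability of 2 um (0.002 mm) over a 500 mm travel:
print(round(relative_repeatability(0.002, 500.0), 6))  # → 0.0004
```

The dimensionless percentage allows, for example, a long linear axis and a short one to be placed side by side despite very different absolute figures.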

Precision example: milling a circular groove with an XY table

The importance of precision for production technology can be illustrated well with the simplified example of a comparatively simple machining operation using two linear axes of a CNC machine. A CNC program contains, in the representation of its controller, all data relevant for producing a particular workpiece, which (without loss of generality) is to be milled from a blank. The program describes the desired shape (geometry) and the equipment used at every moment of the milling process (cutter, feed rate, spindle speed, gear stages, use of coolant, and many other manufacturing details). To mill, for example, a circular groove with circle centre coordinates (X = 0, Y = 0) in the machining plane, which is defined by the two machine axes X (left-right) and Y (front-back), the motion sequences of these two linear axes must be interpolated together. In this specific example a circular groove of track-centre radius R is to be cut with path feed F. The cutter-centre path has the length of the circle's circumference U; the production time T for one circular pass of the groove is T = U/F. The interpolator shapes the motion of the X axis at each time t during machining as X(t) = R·sin(2πt/T) and, synchronously, that of the Y axis as Y(t) = −R·cos(2πt/T). The starting point is (X = 0, Y = −R). X starts (t = 0) with speed F (neglecting the acceleration phase) and moves the distance R to the right with decreasing speed. Having arrived there at time t = T/4 with speed 0, the X axis starts in the opposite direction with increasing speed, passes the origin (X = 0) at speed F, and then slows down until it reaches the left turning point at 3T/4 with speed 0. During the remaining quarter of the machining time T, X moves back to the right with speed growing to F, up to the origin (X = 0).

The Y axis starts (t = 0) at −R in front of the origin, accelerates from rest during T/4 to speed F at the origin (Y = 0), slows down towards the rear reversal point at Y = R, and moves during T/2 to T in the opposite direction towards the front, following the second half-cycle of the cosine. During the whole time T the current position values X(t) and Y(t) are measured continuously in order to feed them back as controlled variables into the control loops of the axes. High precision of the milling machine in this context means that all circular grooves milled by it in the same way, one after the other, are as identical as possible. This goal can only be achieved if the systems for measuring the positions deliver data that are at least bijective maps of the actual positions of the linear axes. Precise position measurement of the two axes X and Y is thus a necessary requirement for the precision of the whole machine.
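The interpolation described above can be sketched as follows; `circular_groove_path`, the radius, feed, and sample count are illustrative choices, not part of any real CNC controller:

```python
import math

def circular_groove_path(R, F, n=4):
    """Sample points of the interpolated cutter-centre path
    X(t) = R*sin(2*pi*t/T), Y(t) = -R*cos(2*pi*t/T) over one
    machining period T = U/F (names and values are illustrative)."""
    U = 2.0 * math.pi * R   # circumference of the cutter-centre path
    T = U / F               # production time for one full circle
    pts = []
    for k in range(n + 1):
        t = k * T / n
        pts.append((R * math.sin(2.0 * math.pi * t / T),
                    -R * math.cos(2.0 * math.pi * t / T)))
    return pts

# Quarter-period samples for R = 10 mm, F = 100 mm/min: the path runs
# (0,-R) -> (R,0) -> (0,R) -> (-R,0) -> back to (0,-R).
for x, y in circular_groove_path(R=10.0, F=100.0):
    print(f"X={x:7.3f}  Y={y:7.3f}")
```

Every sample lies on the circle of radius R, and X and Y reach the turning points described in the text at T/4 intervals.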

Relationship between precision and specific errors

Suppose the geometry program in the controller of an otherwise ideal, metrically dimensioned machine allows switching the system of measurement to inches, but erroneously uses the conversion factor s = 25 instead of 25.4 (1 inch = 25.4 mm). When a circle of radius R is programmed in inches, all computed lengths are uniformly compressed by the factor 125/127 relative to the imperial specification. Ideal circular grooves are produced, but each with a radius and circumference about 1.6 % too small.
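The arithmetic of this conversion error can be checked directly (all values taken from the example above):

```python
MM_PER_INCH = 25.4   # correct conversion factor
FAULTY = 25.0        # factor erroneously used by the controller

scale = FAULTY / MM_PER_INCH        # = 125/127
shrinkage = (1.0 - scale) * 100.0   # ≈ 1.57 %, i.e. about 1.6 %

# A groove programmed with a radius of 2 inch:
R_intended = 2 * MM_PER_INCH   # 50.8 mm
R_milled = 2 * FAULTY          # 50.0 mm: an ideal, but too-small, circle
```

Because every length is scaled by the same factor, the result is still a perfect circle, merely a slightly shrunken one.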

  • This systematic error of the geometry calculation does not affect the precision of the machine. The results are geometrically similar to the programmed workpiece and identical to one another, hence perfectly reproducible. Precision = 0 (optimal).

When the geometry error was corrected in the CNC software, the corrected factor s = 25.4 was accidentally applied only to the X axis, while the Y axis was left at s(Y) = 1. All X positions are now approached correctly, but all Y lengths are compressed by the factor 1/s = 5/127. The result is consistently identical elliptical grooves instead of circles.

  • This systematic error of the geometry calculation likewise does not affect the precision of the machine, even though the machined results are no longer even geometrically similar to the programmed workpiece. Precision = 0 (ideal).
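A small sketch of this second error case, assuming the faulty Y scale 1/25.4 described above: the per-axis scale error deforms every groove into the same ellipse, so repeated grooves remain identical even though none of them is a circle:

```python
import math

S_X = 1.0          # X axis: conversion corrected, positions exact
S_Y = 1.0 / 25.4   # Y axis: faulty scale from the example (= 5/127)

def milled_point(angle, R=10.0):
    """Point actually milled for a programmed circle of radius R:
    the per-axis scale error turns the circle into an ellipse
    (names and values are illustrative)."""
    return (S_X * R * math.sin(angle), -S_Y * R * math.cos(angle))

# Two consecutively milled grooves are point-for-point identical
# (high precision) although neither is the programmed circle:
groove_a = [milled_point(k * 0.1) for k in range(63)]
groove_b = [milled_point(k * 0.1) for k in range(63)]
```

Reproducibility is perfect in this idealized model; only the absolute accuracy of the geometry is ruined.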

In most real linear axes, ball screws transmit the rotations of the servo drives to the machining table. At a change of direction, the backlash causes the axis to dwell too long at the reversal point and leads to the typical reversal spikes in the actually traversed path at the quadrant transitions (X = 0) and (Y = 0). If the translational axis positions are determined from the angle data of a high-resolution rotary encoder on the drive spindle, then during the reversal of direction there is, over the span of the backlash, no bijective mapping between the true values of the table position and the measured angles: the servo drive rotates the spindle, the encoder measures the rotation angle, and the controller computes positions and position changes from it, although the table does not move in this axis. In this example the length measurement does not inherit the precision of the encoder, because the precision of the length measurement also contains the dominating error contributions from the backlash and the coupling (stretching of the drive belt, spindle backlash, and other effects) between rotation (servo drive, ball screw) and translation (linear axis of the machining table).

  • The reversal error caused by spindle backlash degrades the precision of a linear axis that uses a precise rotary encoder as the sensor for position determination.
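The loss of bijectivity at a reversal can be illustrated with a simplified dead-band model of backlash (the model and its parameters are illustrative, not a real servo loop):

```python
def table_trace(encoder_positions, backlash=0.05):
    """Simplified dead-band model of backlash: the encoder-side
    position x can wander inside a gap of width `backlash` without
    the table moving; the table is dragged along only at the ends
    of the gap (illustrative model)."""
    table = encoder_positions[0]
    trace = [table]
    for x in encoder_positions[1:]:
        if x > table + backlash:    # gap taken up in + direction
            table = x - backlash
        elif x < table:             # gap taken up in - direction
            table = x
        trace.append(table)
    return trace

# The encoder reports a clean reversal, but the table dwells at the
# turning point until the whole gap has been crossed again:
cmd = [0.00, 0.02, 0.04, 0.06, 0.04, 0.02, 0.00]
print([round(p, 3) for p in table_trace(cmd)])
# → [0.0, 0.0, 0.0, 0.01, 0.01, 0.01, 0.0]
```

While the encoder value changes monotonically through the reversal, the modelled table position is constant over several steps: several encoder angles map to one table position, so the mapping is not bijective there.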

A high preload on the ball screw reduces the backlash and thereby improves the precision, but at the price of increased friction and the resulting heating of the guide components. A ball screw that heats up changes its length but not its number of turns; its spindle pitch therefore changes locally and temporarily with the temperature fluctuations. This degrades the precision.

  • The thermal dependence of the spindle pitch of the ball screw degrades the precision of a linear axis that uses a precise rotary encoder as the sensor for position determination.

A linear scale fixed to the machining table can measure the position of the associated axis unaffected by spindle backlash and spindle pitch. The precision of such a length measuring device can therefore be passed on to the linear axis as a whole as a component of the machine.

  • Spindle backlash and the thermally dependent spindle pitch do not affect the precision of a position determination by a linear encoder.

Problems with precision specifications

Since, by definition, the repeatability of devices with systematic construction errors can be almost arbitrarily high without any statement about absolute accuracy being directly derivable from it, precision specifications must always be viewed critically. Only if the accuracy of a device is itself beyond doubt does the specification of its precision provide a welcome and important criterion for deciding whether the device is suitable for the task at hand.

Economic importance of precision

From the standpoint of science, precision is a quality feature (of a measuring device) clearly subordinate to accuracy, since in scientific experiments measurements must represent reality as accurately as possible to be useful as tests of hypotheses. From the standpoint of goods production, however, a machine pays off primarily through its precision. The difference between these perspectives lies in the fact that in production, cost is weighed against benefit more heavily than in science. A machine that can quickly and cheaply produce fits that are absolutely imprecise, but all off target in exactly the same way, is a good machine: one can teach it to manufacture off target by exactly the compensating amount, so that the formerly unsuitable fits now match the original task. This describes, in principle, the basic recipe by which many systematic machine errors are rendered harmless in today's CNC-controlled production through error-compensation algorithms. In practice, machines as capital goods are assessed very strictly by this calculation.

Example: fender

The flush integration of the fender into the body of an automobile became possible once sufficiently precise machines were available at acceptable capital cost to press each fender to fit the recess in the body. The tolerances that occurred were initially compensated by hand selection and by closing the gap with an elastic piping. By increasing the precision of the production machines it soon became possible to produce fenders so precisely that the piping was no longer needed and the fenders could be welded in directly. This well-known example illustrates the essential connection between an industrial economy and precision: in automated manufacturing processes, high precision of the production machines correlates directly with high productivity.
