Depth of field

Depth of field is a measure of the extent of the sharp region in the object space of an imaging optical system. The concept plays a central role in photography and describes the size of the distance range within which an object appears sufficiently sharp in the image formed by the camera optics. In general, a large depth of field is achieved with small apertures or lenses of short focal length: everything from front to back then appears more or less sharp. The opposite is the so-called "film look", in which the depth of field is small (English: shallow): the camera renders the central figure sharply, sometimes only a person's eye, while everything in front of and behind it is out of focus. "Depth" here refers to the depth of the space, i.e. the direction away from the optics. In computer animation, depth of field is an optical effect that is computed for each image in a post-processing step and therefore requires considerable computational effort; here the English term depth of field (DOF) is mostly used.

Colloquially, depth of focus and depth of field are used interchangeably, and from a purely linguistic point of view both terms have the same meaning. Because of the technically and photographically quite different meanings, however, the term "depth of field" was first standardized in 1970 (DIN 19040 sheet 3). As a result, the competent technical literature predominantly uses "depth of field" for the extent of the distance zone of sufficient sharpness in the object space, while "depth of focus" is occasionally assigned, in special applications, to the depth (intensity) of sharpness, for which, however, other and more precisely defined terms are generally preferred.


Geometric depth of field

Two fundamentally different arrangements must be distinguished: the camera obscura, which consists of a single pinhole only, and a lens system, which also contains an aperture stop but additionally (at least) one lens in front of or behind it and produces a regular optical image.

Camera obscura

Light rays emanating from an object pass through the pinhole onto the image plane (a screen, a film, or a camera's image sensor). Depending on the diameter of the aperture, these light rays form more or less thick cone-shaped light bodies; the intersection of such a cone with the image plane produces a circle, called the circle of confusion (Z). Such circles exist for every combination of distances between object, aperture and image; the size of the circle in the image plane follows from the intercept theorem. The influence of the pinhole diameter is simply proportional: the larger the hole, the larger the circle of confusion. A sharper picture therefore requires a smaller hole. If the hole is made too small, however, the regime of geometrical optics is left and the wave properties of light come to the fore. The resulting diffraction effects become stronger the smaller the hole, which reduces sharpness again. There is therefore an optimum hole diameter for a camera obscura. Besides the imaging properties, this optimization must also take into account that the luminous flux decreases with a smaller hole diameter, so the exposure time increases.
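
This optimum can be illustrated numerically. The following Python sketch is a simple model under stated assumptions: the total blur is taken as the sum of the geometric blur (roughly the hole diameter D for distant objects) and the diffraction disk diameter (roughly 2.44 · λ · b / D); the constant and the additive combination are modeling assumptions, not values from the text.

    # Minimal sketch: find an optimum pinhole diameter by brute force.
    # Model assumption: total blur = geometric blur (~D) + diffraction disk (~2.44*lam*b/D).
    def total_blur(d_mm, image_distance_mm, wavelength_mm=550e-6):
        geometric = d_mm                                   # shadow of the hole itself
        diffraction = 2.44 * wavelength_mm * image_distance_mm / d_mm
        return geometric + diffraction

    image_distance = 100.0                                 # mm, hole to image plane
    candidates = [0.05 + 0.01 * i for i in range(100)]     # 0.05 .. 1.04 mm
    best = min(candidates, key=lambda d: total_blur(d, image_distance))
    print(f"optimum hole diameter: {best:.2f} mm")         # ~0.37 mm for b = 100 mm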

Lens system

In principle, the structure with an additional lens changes only this much: the lens ensures that (in the ideal case) a sharp image is formed at a certain distance from the lens, so at this position the above blur disappears (and the aperture can be enlarged considerably in the interest of a better light yield). Only for object points that lie in front of or behind this sharply imaged plane does the sharpness decrease, approaching with increasing distance the value that the aperture alone would produce as a camera obscura. More precisely:

In geometrical optics, only those points are imaged as sharp points in the image plane (film, chip) that lie on the plane located at the focused object distance of the lens. All other points, located on closer or more distant planes, appear in the image plane not as points but as discs, called circles of confusion (Z).

Circles of confusion arise because the light bodies falling from the lens onto the image plane (the film) are cone-shaped; the intersection of a cone with the image plane produces a circle.

Closely spaced points that do not lie in the object plane are imaged as closely spaced circles of confusion, which overlap and mix at their edges, creating a blurred image.

The maximum circle of confusion diameter still tolerated for a camera is denoted by Z. The absolute size of this maximum circle of confusion depends on the recording format, since it is taken as 1/1500 of the format diagonal. As long as the circles of confusion are no larger than Z, they lie below the resolution limit of the eye, and the image is considered sharp. This is why the image appears to have not only a focal plane but a focus range. The limited depth of field also causes problems when focus is not measured directly in the image plane but with separate focusing screens or sharpness sensors, because tolerances in the image distance can then easily lead to focusing errors.

The maximum size of the circle of confusion thus depends on the recording format of the respective photographic apparatus: for the 35 mm format (diagonal approx. 43 mm) it is about 0.03 mm, while for a small digital sensor with a diagonal of about 9 mm it is only about 0.006 mm.

Calculating the depth of field

The following variables are required:

  • The lens focal length f, for example 7.2 mm
  • The f-number k (also called the working aperture), for example 5.6
  • The object distance g (the distance of the focused object plane from the front principal plane), for example 1000 mm
  • The circle of confusion diameter Z, for example 0.006 mm.

For Z, the following formula can be used as an approximation, with d as the format diagonal of the recording format in mm and N as the number of distinguishable points along the diagonal:

Z = d / N

This approximation is based on the assumption that the human eye can resolve at most about 1500 points along the diagonal when the viewing distance is approximately equal to the image diagonal. For technical applications with higher image resolution, N must be chosen significantly higher if necessary.
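
As a small illustration, the rule Z = d/N can be written directly in Python (a minimal sketch; the diagonals below are common nominal values):

    # Acceptable circle of confusion Z = d / N.
    def circle_of_confusion(diagonal_mm, n_points=1500):
        return diagonal_mm / n_points

    print(round(circle_of_confusion(43.3), 3))  # 35 mm format: ~0.029 mm
    print(round(circle_of_confusion(8.9), 3))   # small compact sensor: ~0.006 mm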

Hyperfocal Distance

To determine the depth of field, the hyperfocal distance d_h, measured from the lens center, is calculated first:

d_h = f² / (k · Z) + f

Since the left-hand summand is usually much larger than the right one, the right term can be neglected in the corresponding approximation formulas:

d_h ≈ f² / (k · Z)
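
Both variants can be compared in Python with the example values given above (f = 7.2 mm, k = 5.6, Z = 0.006 mm); a minimal sketch:

    # Hyperfocal distance: exact d_h = f^2/(k*Z) + f; the approximation drops "+ f".
    def hyperfocal(f_mm, k, z_mm):
        return f_mm ** 2 / (k * z_mm) + f_mm

    def hyperfocal_approx(f_mm, k, z_mm):
        return f_mm ** 2 / (k * z_mm)

    print(hyperfocal(7.2, 5.6, 0.006))          # ~1550.06 mm
    print(hyperfocal_approx(7.2, 5.6, 0.006))   # ~1542.86 mm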

Near point

Then the distance from the lens center to the near point can be calculated:

d_n = g · (d_h − f) / (d_h + g − 2f)

Furthermore, if the object distance of an imaging system is set to the hyperfocal distance, that is, with the condition g = d_h, one obtains exactly:

d_n = d_h / 2

Thus the near point lies at half the hyperfocal distance, and in this case all objects from half the hyperfocal distance to infinity are imaged sufficiently sharply.

For object distances that are large compared to the focal length, the focal length in the differences can be neglected, and the result is:

d_n ≈ g · d_h / (d_h + g)

Far point

Similarly, the distance from the lens center to the far point can be calculated:

d_f = g · (d_h − f) / (d_h − g)

For object distances sufficiently large compared to the focal length, the focal length in the differences can again be neglected, and accordingly:

d_f ≈ g · d_h / (d_h − g)

Depth of field

The depth of field extends from the near point to the far point, with

Δd = d_f − d_n

If the set object distance is greater than or equal to the hyperfocal distance (g ≥ d_h), the depth of field is infinite, because the far point then lies at infinity.

If the set object distance is equal to the focal length (g = f), the depth of field is zero, since far point and near point coincide; the image then lies at infinity. Macro shots with correspondingly large magnifications therefore usually have very small depths of field.
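
Putting near point, far point and depth of field together, the example values from above (f = 7.2 mm, k = 5.6, g = 1000 mm, Z = 0.006 mm) give the following; a minimal sketch of the formulas in this section:

    # Near point, far point and depth of field from the formulas above.
    def depth_of_field(f_mm, k, g_mm, z_mm):
        d_h = f_mm ** 2 / (k * z_mm) + f_mm                  # hyperfocal distance
        d_n = g_mm * (d_h - f_mm) / (d_h + g_mm - 2 * f_mm)  # near point
        if g_mm >= d_h:                                      # far point at infinity
            return d_n, float("inf"), float("inf")
        d_f = g_mm * (d_h - f_mm) / (d_h - g_mm)             # far point
        return d_n, d_f, d_f - d_n

    near, far, dof = depth_of_field(7.2, 5.6, 1000.0, 0.006)
    print(round(near), round(far), round(dof))               # 608 mm, 2805 mm, 2196 mm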

Approximations

Neglecting the focal length, i.e. with d_n ≈ g · d_h / (d_h + g) and d_f ≈ g · d_h / (d_h − g), the depth of field is obtained approximately as:

Δd ≈ 2 · d_h · g² / (d_h² − g²)

If the object distance is set to the N-th part of the hyperfocal distance, i.e. with

g = d_h / N

then the depth of field decreases approximately with the square of N:

Δd ≈ 2 · d_h / (N² − 1)
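
The quality of this approximation can be checked numerically; a minimal sketch with an assumed 50 mm f/8 lens and Z = 0.03 mm:

    # Compare the exact depth of field at g = d_h/N with 2*d_h/(N^2 - 1).
    f, k, z = 50.0, 8.0, 0.03                       # assumed example lens
    d_h = f ** 2 / (k * z) + f
    for n in (2, 5, 10):
        g = d_h / n
        d_n = g * (d_h - f) / (d_h + g - 2 * f)     # near point
        d_f = g * (d_h - f) / (d_h - g)             # far point
        print(n, round(d_f - d_n), round(2 * d_h / (n ** 2 - 1)))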

Dependencies

From the approximation formula for the hyperfocal distance it is easy to read off that it increases, and the depth of field thus decreases, as the focal length becomes larger, the f-number becomes smaller (i.e. the aperture larger), or the permissible circle of confusion becomes smaller.

The dependence between the hyperfocal distance and the format size can easily be estimated with the following consideration: for large object distances the image distance approximately equals the focal length, so the focal length can be approximated as follows as a function of the format diagonal d:

f ≈ d / (2 · tan(α/2))

where α is the desired angle of view, which is decisive for the perspective effect of the image. Substituting this approximation, together with Z = d/N, into the approximate equation for the hyperfocal distance gives:

d_h ≈ N · d / (4 · k · tan²(α/2))

This means that the hyperfocal distance increases linearly with the format size when the f-number, the number of points on the image diagonal and the angle of view are kept constant. The formula also shows that the depth of field is smaller the smaller the f-number or the angle of view is; thus wide-angle lenses have, other things being equal, a larger depth of field than telephoto lenses, or equivalently, the hyperfocal distance is smaller with wide-angle lenses than with telephoto lenses.

It can also be noted that the depth of field is always the same for a constant ratio of image sensor diagonal to f-number, given the same angle of view and the same number of acceptable circles of confusion along the diagonal.
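
This scaling can be sketched numerically (assumed values; the angle of view, the f-number and the point count are held constant):

    import math

    # d_h ~ N * d / (4 * k * tan(alpha/2)^2): linear in the format diagonal d.
    def hyperfocal_from_format(diagonal_mm, k, alpha_deg, n_points=1500):
        t = math.tan(math.radians(alpha_deg) / 2)
        return n_points * diagonal_mm / (4 * k * t ** 2)

    for d in (43.3, 28.4, 8.9):                         # nominal format diagonals in mm
        print(round(hyperfocal_from_format(d, 8, 47)))  # 47 degrees: "normal" lens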

Myopia

When the eye of a normally sighted or farsighted person is focused at the hyperfocal distance, the range from half the hyperfocal distance to infinity is imaged and perceived sufficiently sharply. It is different for shortsighted people, who due to their myopia can focus only up to a maximum distance, so that the hyperfocal distance often cannot be reached.

For the calculation, a normal refractive power of the eye of 59 diopters was assumed. This yields a normal focal length of 16.9 mm and an image circle diameter of 14.6 mm. If 1500 is assumed for the number of points along the format diagonal, the diameter of the acceptable circle of confusion is 9.74 micrometers. With uncorrected myopia, the eye can focus only up to a maximum object distance, which follows via the imaging equation from the actual refractive power, usually given as a negative diopter difference ΔD:

g_max = −1 / ΔD
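
For example, the maximum focusing distance of an uncorrected myopic eye follows directly from the diopter difference; a minimal sketch:

    # Maximum object distance of an uncorrected myopic eye: g_max = -1 / delta_D.
    def far_point_m(diopter_difference):
        return -1.0 / diopter_difference

    print(far_point_m(-0.5))   # -0.5 dpt -> 2.0 m
    print(far_point_m(-2.0))   # -2.0 dpt -> 0.5 m
    print(far_point_m(-5.0))   # -5.0 dpt -> 0.2 m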

The depths of field for the eye can be calculated, for example, for three different lighting situations and the corresponding f-numbers:

  • F-number ≈ 4: dilated pupil (diameter 4.2 mm, in dark surroundings)
  • F-number ≈ 8: medium pupil (diameter 2.1 mm, in average surroundings)
  • F-number ≈ 15: small pupil (diameter 1.1 mm, in bright surroundings)

If the far point reaches infinity, the eye is focused at the hyperfocal distance, and further accommodation to greater distances is no longer necessary for sharp vision.

Wave-optical depth of field

All optical images are limited by diffraction, so that a single point is never imaged as a point but only as an Airy disk (diffraction disk). In analogy to photographic film, the separability of two adjacent Airy disks defines a maximum permissible circle of confusion. According to the Rayleigh criterion, the intensity between two adjacent image points must drop by 20 percent for them to be considered resolved. The size of the diffraction disk depends on the wavelength of the light. The Rayleigh depth of field is defined as the range within which the size of the image point does not change, i.e. remains at the smallest possible (diffraction-limited) value:

Δz = λ / (n · sin²u)

Here λ is the wavelength, n the refractive index and u the aperture angle of the imaging system.
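
Expressed numerically (a minimal sketch; the sample values, a dry system with n = 1, sin u = 0.5 and green light, are illustrative assumptions):

    # Wave-optical (Rayleigh) depth of field: dz = lambda / (n * sin(u)^2).
    def rayleigh_dof_um(wavelength_um, n, sin_u):
        return wavelength_um / (n * sin_u ** 2)

    print(rayleigh_dof_um(0.55, 1.0, 0.5))  # ~2.2 micrometers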

The Rayleigh depth of field is relevant for diffraction-limited optical systems, for example in microscopy or in photolithography. In photography, wave-optical blur becomes noticeable when stopping down beyond the beneficial aperture.

The beneficial aperture is determined by the maximum permissible circle of confusion Z, the image scale β and the wavelength λ.

For usual applications (small image scale) in 35 mm photography, this results in a beneficial aperture of about f/32, so that diffraction hardly plays a role outside macro photography.

Since the small sensors of modern compact digital cameras require very small permissible circles of confusion, however, the beneficial aperture moves into the range of commonly used f-numbers. For a 1/1.8" sensor, for example, it lies at about f/8, i.e. within or below the range of usual f-numbers.

Pinhole camera

In a pinhole camera, the size of the circles of confusion depends on the object distance g, the image distance b and the hole diameter D. An object is imaged sufficiently sharply when:

D · (g + b) / g ≤ Z

The far point of a pinhole camera always lies at infinity. For very large object distances g, the condition simplifies to D ≤ Z; that is, the hole diameter must not be larger than the permissible circle of confusion diameter, otherwise a pinhole camera cannot produce a sufficiently sharp image in the far range.

Application in photography

Image design with depth of field

The selective use of depth of field by adjusting aperture, distance and focal length makes it possible to direct the viewer's attention to the main subject. To this end, the photographer limits the depth of field as closely as possible to the plane on which the main subject lies. Foreground and background are then not sharply imaged. This selective blur distracts less from the main subject, which is emphasized by the selective sharpness.

A limited depth of field can lead, in photographs of point-like objects that lie just outside the sharply imaged object distance, to so-called ghost spots in the picture.

For small recording formats, such as when making enlargements or when using digital cameras with small image sensors (crop factor), the maximum permissible circle of confusion shrinks (at a constant number of pixels), which initially reduces the depth of field. However, the smaller recording formats require proportionally shorter lens focal lengths to maintain the same angle of view, and these in turn enlarge the depth of field. Both the reduction of the image sensor (and thus of the maximum permissible circle of confusion) and the consequently necessary shortening of the focal length affect the depth of field. The influences act in opposite directions, but they do not cancel out: the maximum permissible circle of confusion enters the depth of field linearly, the focal length roughly quadratically, so the influence of the focal length outweighs. With smaller formats the depth of field therefore becomes larger, and it becomes increasingly difficult to use selective sharpness as a photographic design element directly when taking pictures. To compensate for both effects, the pixel density of the sensors would have to grow approximately quadratically with the reduction of the sensor size, which quickly runs into technical limits.

Factors influencing the depth of field

The focus range can be influenced by several factors (see the section on calculating the depth of field):

  • It is expanded by stopping down and narrowed by opening the aperture. So the smaller the aperture, the larger the focus range.
  • Another factor influencing the depth of field is the imaging scale. The imaging scale depends on the focal length of the lens and the object distance (via the image distance).
  • For camera systems with different format sizes and thus correspondingly different normal focal lengths, other things being equal (f-number, angle of view and image resolution), the depth of field is smaller the larger the format. With larger cameras it is therefore easier to restrict the depth of field (as in portraits with a blurred background) than with small cameras. If a subject is first photographed so that it completely fills the sensor height, and is then made smaller on the sensor by a factor x merely by increasing the distance to the subject, the depth of field grows under certain conditions approximately quadratically with x: a reduction of the image height by a factor x = 2, for example, leads to roughly four times the depth of field (see the sketch after this list). This rule of thumb applies if the distance to the subject is less than about a quarter of the hyperfocal distance. It carries over to different sensor sizes accordingly: a reduction of the sensor height by a factor x enlarges the depth of field by about a factor x², if the subject completely fills the sensor height in both cases and the same f-number is set in both cases. The focal length thus has no appreciable influence, see below.
  • The comparison of camera systems with different format sizes looks different if one compares not lenses with the same f-number but lenses with the same entrance pupil, i.e. lenses that can process the same light beam and use comparably large front elements: two lenses with the same entrance pupil and the same angle of view produce the same depth of field regardless of sensor size.
  • The distribution of the depth of field in front of and behind the focused object varies with the set distance: at close range a ratio of roughly 1:1 is reached; with increasing distance the share behind the focused object grows steadily, in the extreme case until infinity just lies within the focus range (hyperfocal distance).
  • The depth of field is, within certain limits, virtually unchanged when a subject is photographed once with a short focal length from a short distance and once with a long focal length from a distance such that it appears the same size in the picture: the effect of the focal length mentioned above is compensated by the different object distance. This holds if the same f-number is used in both cases and if the distance to the subject at the short focal length is less than about a quarter of the hyperfocal distance.
  • With the technique of focus stacking, a seemingly extremely large depth of field can be achieved by taking a series of images with different focus settings and then combining them using methods of computer graphics.
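
The quadratic rule of thumb from the list above can be checked with the exact formulas from the calculation section; a minimal sketch with assumed values (50 mm lens, f/4, Z = 0.03 mm):

    # Doubling the subject distance (image height shrinks by x = 2) should roughly
    # quadruple the depth of field while g stays well below the hyperfocal distance.
    f, k, z = 50.0, 4.0, 0.03
    d_h = f ** 2 / (k * z) + f                 # ~20883 mm
    for g in (2000.0, 4000.0):                 # 2 m, then twice the distance
        d_n = g * (d_h - f) / (d_h + g - 2 * f)
        d_f = g * (d_h - f) / (d_h - g)
        print(g, round(d_f - d_n))             # ~378 mm vs. ~1573 mm (factor ~4.2)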

Camera Settings

In the macro range, the depth of field s is determined solely by the imaging scale β, the set f-number k and the permissible circle of confusion diameter Z. As long as the permissible circle of confusion is considerably smaller than the focal length, it is completely independent of the focal length.

It is calculated as:

s = 2 · k · Z · (β + 1) / β²

In the non-macro range (the error exceeds 10 % when the reduction factor is greater than 0.3 · focal length / (circle of confusion radius · f-number)), the formula must be extended by a correction factor to:

s = 2 · k · Z · (β + 1) / β² · 1 / (1 − (k · Z / (f · β))²)

The scope of this formula ends where negative values result: the far point then lies at infinity and the depth of field becomes infinitely large; formally the far point lies behind the lens, and concave wavefronts fall within the focus range.

For practical use in the field:

  • One memorizes the value 20 · Z for one's current camera (with many crop-sensor DSLRs about 0.4 mm).
  • For a reduction factor of 10, 5, 2 or 1, this value is multiplied by 110, 30, 6 or 2 respectively (and thus becomes 44 mm, 12 mm, 2.4 mm and 0.8 mm).
  • This gives the depth of field for f-number 10; for other f-numbers the value increases or decreases proportionally (see the sketch below).
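
The multipliers in this rule of thumb follow from the macro formula: with the reduction factor V = 1/β, s = 2 · k · Z · (β + 1) / β² becomes s = 2 · k · Z · V · (V + 1). A minimal sketch checking the values, assuming Z = 0.02 mm (i.e. a memorized value of 0.4 mm) and f-number 10:

    # Macro depth of field s = 2*k*Z*(beta+1)/beta^2; with beta = 1/V this is
    # s = 2*k*Z*V*(V+1), i.e. 0.4 mm times 110/30/6/2 for V = 10/5/2/1 at k = 10.
    def macro_dof_mm(k, z_mm, v):
        beta = 1.0 / v
        return 2 * k * z_mm * (beta + 1) / beta ** 2

    for v in (10, 5, 2, 1):
        print(v, round(macro_dof_mm(10, 0.02, v), 1))  # 44.0, 12.0, 2.4, 0.8 mm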

Further comments:

  • Some electronically controlled cameras offer the possibility of marking first the front and then the rear point of the desired focus range with the shutter release (DEP). The camera calculates the aperture required for this and sets the focus so that the sharp zone corresponds exactly to the marked range. The A-DEP function of current digital cameras has nothing to do with this, however; there, the camera determines the front and rear focus points by using all autofocus points.
  • The adjustments of view cameras allow the use of the so-called Scheimpflug principle. This does not change the focus range of the lens, but it allows the focal plane to be tilted and thus adapted to the subject. For small and medium format cameras there are, for the same purpose, special tilt or swing bellows or so-called tilt lenses, a feature that is often combined with a shift function for parallel displacement of the focal plane.
  • Some specialty lenses offer a variable field curvature function (VFC, variable field curvature), which allows the plane of focus to be bent rotationally symmetrically, convexly or concavely, in a continuously adjustable way.
  • With a special slide rule, depth of field calculations can be performed in the field for a given lens. For a given aperture, the optimum focus point for a desired depth of field, or the resulting depth of field for a given focus point, can be determined. In addition, the aperture required to achieve a desired depth of field can be determined.

Applications in computer graphics

Many common techniques in computer graphics use, for reasons of speed, direct transformations (e.g. via matrix multiplications) to convert geometry into image data. These mathematical constructs, however, also produce an infinite depth of field. Since depth of field is used as a design element, various methods have been developed to imitate the effect.

In 3D computer games, the direct rendering of polygons has prevailed. This method has speed advantages over indirect rendering (ray tracing), but also technical limitations. The depth of field cannot be computed directly, but must be approximated in a post-processing step using a suitable filter: a selective soft-focus filter that uses the Z-buffer for edge detection. This prevents objects standing further forward in the image from being included in the blurring of the background, and vice versa. Problems arise in particular with transparent objects, because they must be handled in separate post-processing passes, which reduces the speed of image generation.
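
The principle of such a post-processing filter can be sketched in a few lines of Python with NumPy and SciPy. This is a strongly simplified illustration, not an actual game-engine implementation: a per-pixel blur weight is derived from the Z-buffer, and the image is blended between a sharp and a blurred version (function names and parameters are made up for the example):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def depth_of_field_pass(image, depth, focus_depth, blur_sigma=4.0):
        """Simplified DOF post-process: blend sharp and blurred image per pixel,
        weighted by how far each pixel's depth lies from the focus depth."""
        blurred = np.stack(
            [gaussian_filter(image[..., c], sigma=blur_sigma) for c in range(3)],
            axis=-1,
        )
        # Normalized per-pixel blur weight from the Z-buffer (0 = in focus).
        weight = np.clip(np.abs(depth - focus_depth) / depth.max(), 0.0, 1.0)
        return image * (1.0 - weight[..., None]) + blurred * weight[..., None]

    # Toy example: random "image" and a depth ramp, focused at mid depth.
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))
    z = np.tile(np.linspace(1.0, 10.0, 64), (64, 1))
    print(depth_of_field_pass(img, z, focus_depth=5.0).shape)  # (64, 64, 3)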

With indirect rendering, both the method described above and multisampling can be used, although very many samples are needed to produce a depth-of-field effect. These methods are therefore preferred in renderers that are unbiased. Such renderers follow a model very close to that of a real camera, in which individual photons or rays and their color values are accumulated on a film; with continued calculation, i.e. a growing number of samples, the image noise is steadily reduced. In contrast to the former methods, this produces more credible and realistic results (bokeh, etc.), but it is also orders of magnitude slower and therefore unsuitable for real-time graphics.

The images in this section were computed with an unbiased renderer. Sufficient noise suppression required 2500 samples per pixel, which corresponds to tracing approximately 11.6 billion light paths through the scene, including multiple reflections and refractions.
