Texture Mapping

The term texture mapping refers to a method of 3D computer graphics. It serves to equip the surfaces of three-dimensional surface models with two-dimensional images and surface properties, so-called "textures". Textures can make computer-generated images appear more detailed and more realistic without the underlying model having to be refined.

Texture coordinates

In addition to its position in XYZ space, each vertex of a 3D object can be assigned a texture coordinate in UVW space. The texture coordinate (also known as a UV or UVW coordinate) defines how a texture (a bitmap or a mathematical texture) is mapped onto a polygon. If a two-dimensional bitmap texture is used, as is common in computer games, only the u and v coordinates are needed to determine which part of the image is mapped onto the polygon. For mathematical textures, such as 3D volumetric noise, the w coordinate is often required as well.
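
As a minimal sketch of this association (the class and field names are illustrative assumptions, not a specific engine's vertex format), a vertex in Python might simply carry a texture coordinate alongside its position:

    from dataclasses import dataclass

    @dataclass
    class Vertex:
        # Position in 3D (XYZ) space.
        x: float
        y: float
        z: float
        # Texture coordinate in UVW space; w is only needed for
        # mathematical textures such as 3D volumetric noise.
        u: float
        v: float
        w: float = 0.0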

The UV coordinate (0,0) corresponds to the lower left corner of the texture and the UV coordinate (1,1) to the upper right corner. UV values greater than 1 and less than 0 are possible and lead to edge effects, whose behavior can be defined; two common possibilities are repetition and mirroring of the texture. This allows a texture to be tiled across a polygon simply by choosing the texture coordinates.
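
As a small Python sketch (the function names are illustrative), the two edge behaviors can be implemented by mapping any u or v value back into the interval [0, 1):

    import math

    def wrap_repeat(t):
        # Edge repetition: keep only the fractional part, so 1.25 -> 0.25
        # and -0.25 -> 0.75; the texture tiles seamlessly.
        return t - math.floor(t)

    def wrap_mirror(t):
        # Mirroring: reflect every second tile, so adjacent tiles
        # alternate direction, e.g. 1.25 -> 0.75.
        t = math.fabs(t) % 2.0
        return 2.0 - t if t > 1.0 else t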

It is also possible to assign several texture coordinates to a single vertex. This is referred to as multiple mapping channels. In this way, several images or image segments can be displayed superimposed on a polygon.

For 3D models with many polygons, a single texture is often used for the whole model, so that each point of the model has only one set of texture coordinates (rather than different texture coordinates for the different polygons that share this point). This format is particularly efficient for hardware-accelerated 3D graphics and is also convenient for the designer of the 3D model.

In the simplest variant of texture mapping, the texture coordinates are interpolated linearly along the boundary lines of the polygon, which have already been transformed from 3D to 2D screen space. They are then interpolated linearly along each screen line (or column) from edge line to edge line, and at each pixel the color value of the texel (the pixel in the texture) associated with the interpolated (u, v) coordinates is taken.
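
As a rough Python sketch of this affine variant (all names and the nearest-texel lookup are illustrative assumptions), the texture coordinates are interpolated linearly along one screen line and each pixel takes the color of the texel it lands on:

    def draw_scanline(framebuffer, y, x0, uv0, x1, uv1, texture):
        # Linearly interpolate (u, v) from the left edge to the right
        # edge of the polygon on screen line y.
        tex_h, tex_w = len(texture), len(texture[0])
        for x in range(x0, x1 + 1):
            t = (x - x0) / max(x1 - x0, 1)
            u = uv0[0] + t * (uv1[0] - uv0[0])
            v = uv0[1] + t * (uv1[1] - uv0[1])
            # Nearest-texel lookup; u and v are assumed to lie in [0, 1].
            tx = min(int(u * tex_w), tex_w - 1)
            ty = min(int(v * tex_h), tex_h - 1)
            framebuffer[y][x] = texture[ty][tx]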

Perspective Correction

For polygons with a greater extent in the viewing direction, the method described above leads to visually unsatisfactory results, because the texture coordinates are interpolated on the projection. This does not take into account that a distance in the more remote part of the projected polygon corresponds to a greater distance in the original polygon in 3D space than the same distance in the nearer part. As a result, the assignment of texture coordinates to points in three-dimensional space changes whenever the perspective changes.

To solve this problem, instead of the texture coordinates u and v, the values u/z and v/z, and furthermore 1/z, are usually interpolated linearly, where z is the coordinate in the viewing direction in 3D space (z or 1/z must therefore be stored for each projected point of the polygon). To compute the texture coordinates for a pixel, divisions must then be performed:

u = (u/z) / (1/z)

v = (v/z) / (1/z)

Because divisions are relatively slow operations, they are usually not performed at every pixel. Instead, u and v are computed exactly only at a few pixels that are evenly distributed over the polygon; for all other pixels, the values of u and v are interpolated between those of these pixels. In this way, the disturbing effects can be reduced considerably without spending too much computing power on them.
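
The following Python sketch (variable names are illustrative) interpolates u/z, v/z and 1/z linearly across a span and recovers the perspective-correct coordinates with one pair of divisions per pixel; performing the divisions only at every n-th pixel and interpolating u and v in between would be the optimization just described:

    def perspective_correct_span(uv0, z0, uv1, z1, steps):
        # Precompute u/z, v/z and 1/z at both endpoints of the span
        # (steps must be at least 1).
        uz0, vz0, iz0 = uv0[0] / z0, uv0[1] / z0, 1.0 / z0
        uz1, vz1, iz1 = uv1[0] / z1, uv1[1] / z1, 1.0 / z1
        coords = []
        for i in range(steps + 1):
            t = i / steps
            # Linear interpolation is correct for u/z, v/z and 1/z ...
            uz = uz0 + t * (uz1 - uz0)
            vz = vz0 + t * (vz1 - vz0)
            iz = iz0 + t * (iz1 - iz0)
            # ... and the division recovers u = (u/z)/(1/z), v = (v/z)/(1/z).
            coords.append((uz / iz, vz / iz))
        return coords

For example, perspective_correct_span((0.0, 0.0), 1.0, (1.0, 1.0), 4.0, 8) yields u values that advance slowly near the viewer (u = 0.2 at the middle of the span) and quickly toward the far end, exactly the non-uniform spacing that affine interpolation misses.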

Texture interpolation

The methods described so far assume, for simplicity, that each pixel can be assigned to exactly one texel. However, if both pixels and texels are regarded as points without extent, this is generally not the case. Rather, the texture coordinates of a pixel generally lie between several texels. It is therefore necessary to decide how the color value for the pixel is obtained from the color values of the surrounding texels: a suitable scaling method is needed.

The simplest and fastest scaling method is to select the closest texel. This method is called nearest neighbor or point sampling. With the more complex bilinear filtering, the sought color value is interpolated from the four surrounding texels as a function of their distances. Even more complex filters, such as the Gaussian filter, draw more texels into the calculation or weight the distances differently. Since unsuitable interpolation leads to undesirable aliasing effects, for example moiré patterns, a compromise between speed and artifact formation must be found.
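
In Python, the two simplest methods might look as follows for a grayscale texture stored as a nested list (the names and the clamping behavior are assumptions for this sketch); sample_bilinear is also reused by the later sketches:

    def sample_nearest(texture, x, y):
        # Point sampling: pick the single closest texel
        # (x and y are assumed to lie within the texture).
        tx = min(int(x + 0.5), len(texture[0]) - 1)
        ty = min(int(y + 0.5), len(texture) - 1)
        return texture[ty][tx]

    def sample_bilinear(texture, x, y):
        # Bilinear filtering: blend the four surrounding texels,
        # weighted by their distance to the sample position (x, y).
        x0, y0 = int(x), int(y)
        x1 = min(x0 + 1, len(texture[0]) - 1)
        y1 = min(y0 + 1, len(texture) - 1)
        fx, fy = x - x0, y - y0
        top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
        bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
        return top * (1 - fy) + bottom * fy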

MIP mapping

These techniques are applicable as long as the spacing of the pixels is smaller than that of the texels, so that at most one texel is assigned to each pixel. If, however, the spacing of the pixels is greater than that of the texels, a pixel corresponds to a whole area of the texture. While it is not difficult to compute the color value as the mean of all the texels involved, this is very expensive, since a large number of arithmetic operations has to be performed for a single pixel, and it is therefore not practical.

Instead, MIP maps are used. In addition to the original texture, they contain copies of the texture with decreasing size, so-called levels of detail (LOD). From these, the largest level of detail is chosen that restores the normal state "pixel spacing smaller than texel spacing", and it is then sampled like the original texture. In addition to the interpolation described before, this offers the possibility of performing a further linear interpolation between two successive levels of detail; combined with bilinear filtering, this yields trilinear filtering. The use of MIP maps in conjunction with point sampling already reduces aliasing considerably; in conjunction with more sophisticated filters and interpolation between the levels of detail, it can be reduced to a minimum.
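
As a rough illustration (the 2x2 averaging used to build the levels and the level-selection rule are simplified assumptions, and a square power-of-two texture is assumed), the following Python sketch builds a MIP chain and blends two neighboring levels of detail, reusing sample_bilinear from above:

    import math

    def build_mip_chain(texture):
        # Each level of detail halves the previous one by averaging
        # 2x2 blocks of texels (square, power-of-two texture assumed).
        levels = [texture]
        while len(levels[-1]) > 1:
            src = levels[-1]
            half = len(src) // 2
            levels.append([[(src[2*y][2*x] + src[2*y][2*x+1] +
                             src[2*y+1][2*x] + src[2*y+1][2*x+1]) / 4.0
                            for x in range(half)] for y in range(half)])
        return levels

    def sample_trilinear(levels, u, v, texels_per_pixel):
        # Choose the level of detail so that pixels are again denser
        # than texels, then blend bilinear taps from the two levels
        # that bracket the ideal (fractional) level.
        lod = math.log2(max(texels_per_pixel, 1.0))
        lod = max(0.0, min(lod, len(levels) - 1.0))
        lo = int(lod)
        hi = min(lo + 1, len(levels) - 1)
        frac = lod - lo

        def tap(level):
            size = len(levels[level])
            return sample_bilinear(levels[level], u * (size - 1), v * (size - 1))

        return tap(lo) * (1 - frac) + tap(hi) * frac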

Anisotropic filtering

The methods described above consider pixels and texels as points, i.e. as objects without extent. Instead, they can also be regarded as small squares. One then has to take into account that a pixel projected onto the texture does not correspond to a square there, but to an area stretched in one direction, whenever the polygon extends into the viewing direction. A filtering method that takes this different extent in different directions (anisotropy) of the pixel in texture space into account is called anisotropic filtering.
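
A common simplification of anisotropic filtering takes several bilinear samples spread along the longer axis of the pixel's footprint in texture space and averages them. The Python sketch below (the footprint axis and the number of taps are illustrative parameters, not a specific hardware algorithm) follows this idea, again reusing sample_bilinear:

    def sample_anisotropic(texture, cx, cy, axis_x, axis_y, num_taps=8):
        # Average several bilinear taps along the major axis of the
        # pixel footprint, (axis_x, axis_y), centered on (cx, cy);
        # all coordinates are in texel units.
        total = 0.0
        for i in range(num_taps):
            t = (i + 0.5) / num_taps - 0.5   # spread taps from -0.5 to +0.5
            x = min(max(cx + t * axis_x, 0.0), len(texture[0]) - 1.0)
            y = min(max(cy + t * axis_y, 0.0), len(texture) - 1.0)
            total += sample_bilinear(texture, x, y)
        return total / num_taps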

Specific methods

There are various methods for making the covering of a surface appear three-dimensionally structured:

  • With bump mapping, the lighting calculation is carried out with a normal vector that is varied across the surface (a sketch follows this list).
  • With displacement mapping, additional polygons are created from the information in the texture.
  • With environment mapping, reflections are simulated with the help of a texture.
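
As a minimal illustration of the bump mapping entry above (the height-map-based perturbation and all names are assumptions for this sketch, not a specific renderer's method), the following Python fragment varies the surface normal using finite differences of a height texture and feeds it into a simple diffuse lighting term:

    def perturbed_normal(height, x, y, strength=1.0):
        # Finite differences of the height map tilt the base normal
        # (0, 0, 1); x and y must be interior texels of the map.
        dx = (height[y][x + 1] - height[y][x - 1]) * 0.5 * strength
        dy = (height[y + 1][x] - height[y - 1][x]) * 0.5 * strength
        nx, ny, nz = -dx, -dy, 1.0
        length = (nx * nx + ny * ny + nz * nz) ** 0.5
        return (nx / length, ny / length, nz / length)

    def diffuse_light(normal, light_dir):
        # Lambertian shading: clamp the dot product of the (perturbed)
        # normal and the direction toward the light.
        d = sum(n * l for n, l in zip(normal, light_dir))
        return max(d, 0.0)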