Volume rendering

Volume rendering is a field of 3D computer graphics. In volume graphics, objects are modeled by voxels.


Volume rendering can model and depict (partially) transparent objects, for example translucent skin, and objects without sharp boundaries, such as clouds, with high accuracy and realism. It is of great importance for imaging techniques, since these inherently produce voxel data; outside that field it is not widespread. One reason is that, so far, there is no dedicated hardware acceleration for it, as is common with graphics processors for surface graphics.

Physical basis

Volume rendering is based on radiative transfer theory, which describes how light behaves on its way through a volume and whose central result is the radiative transfer equation (given here in one common form, where $\sigma_t$ is the extinction coefficient, $\sigma_s$ the scattering coefficient, $p$ the phase function and $\varepsilon$ the emission):

$$(\vec{\omega} \cdot \nabla)\, I(\vec{x}, \vec{\omega}) = -\sigma_t(\vec{x})\, I(\vec{x}, \vec{\omega}) + \sigma_s(\vec{x}) \int_{\Omega} p(\vec{\omega}', \vec{\omega})\, I(\vec{x}, \vec{\omega}')\, d\vec{\omega}' + \varepsilon(\vec{x}, \vec{\omega})$$

Since solving the transport equation entails a high computational cost, volume rendering first makes three simplifying assumptions:

  • Voxels are illuminated directly by the light sources only. Indirect light scattered or reflected from other voxels is not considered.
  • There are only direct light sources; no fluorescent or self-luminous objects exist within the scene.
  • Illumination is described by a simple local illumination model; the Phong lighting model has proven effective here.

With these assumptions, the scattering terms of the equation can be dropped; intuitively, a ray of light never changes its direction. This turns the transport equation into the simpler visualization equation:

$$\frac{dI(s)}{ds} = \varepsilon(s) - \sigma(s)\, I(s)$$

This differential equation can be solved analytically. Discretizing the solution and neglecting some smaller terms yields the following recursive formula used for rendering, where $\alpha_n$ is the opacity and $c_n$ the color of the $n$-th sample along the ray:

$$I_n = I_{n-1}\,(1 - \alpha_n) + c_n\, \alpha_n$$

This is usually given in the shorthand notation of the over operator:

$$I_n = c_n \;\text{over}\; I_{n-1}$$
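As a plausibility check (a sketch not contained in the original article, with illustrative values for sigma, D and I0): for a purely absorbing, homogeneous medium, the recursion I_n = I_{n-1}(1 - alpha) + c_n*alpha reproduces the analytic Beer-Lambert attenuation exactly when the per-step opacity is chosen as alpha = 1 - exp(-sigma * ds):

```python
import math

sigma = 2.0   # absorption coefficient (hypothetical value)
D = 1.0       # ray length through the volume
I0 = 1.0      # background intensity entering the volume

def composite(n_steps):
    # Discretized emission-absorption recursion along the ray.
    # The emission term c*alpha is zero here (pure absorption).
    ds = D / n_steps
    alpha = 1.0 - math.exp(-sigma * ds)
    intensity = I0
    for _ in range(n_steps):
        intensity = intensity * (1.0 - alpha)
    return intensity

# Analytic Beer-Lambert solution I(D) = I0 * exp(-sigma * D):
analytic = I0 * math.exp(-sigma * D)
print(composite(100), analytic)
```

Because (exp(-sigma*ds))**n equals exp(-sigma*D), the discrete and analytic results agree up to floating-point error for any step count.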



When rendering volume graphics, four steps are involved:

  • Classification
  • Shading
  • Interpolation
  • Compositing

The order of these steps is not strictly predetermined; in general only two rules apply: classification must take place before shading, and compositing comes last.


Classification

During classification, material properties are assigned to the voxels. When the data are acquired, only a single value per voxel is recorded: in computed tomography, for example, the X-ray density; in magnetic resonance imaging, the content of protons or hydrogen nuclei. This value alone does not say enough about the material, such as what color it has or whether it is strongly specular. During classification, therefore, a whole set of material properties is derived from this single value according to user specifications. If, for example, a high voxel value indicates a high water content, one might conclude that it is soft body tissue, which in turn is slightly pink, somewhat translucent, and not reflective at all. Usually the material is characterized in terms of the Phong lighting model.
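In practice, classification is often implemented as a transfer function, frequently a simple lookup from the raw scanner value to color and opacity. The following sketch uses entirely hypothetical value ranges and material properties, not calibrated to any real CT or MRI data:

```python
def classify(value):
    """Map a raw voxel value (0-255) to RGBA material properties.

    The ranges and colors below are illustrative assumptions only.
    """
    if 64 <= value < 128:              # assumed soft-tissue range:
        return (1.0, 0.7, 0.7, 0.3)    # slightly pink, somewhat translucent
    if value >= 128:                   # assumed bone range:
        return (1.0, 1.0, 1.0, 0.9)    # bright and nearly opaque
    return (0.0, 0.0, 0.0, 0.0)        # air: fully transparent

print(classify(100))   # soft tissue -> (1.0, 0.7, 0.7, 0.3)
```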


Shading

Shading determines how much light a voxel reflects in the direction of the viewer and what color it has. Usually Phong shading is used. To apply Phong shading in its original definition, each voxel would need a normal: in surface graphics, the normal is a vector that points perpendicularly away from the surface under consideration. In volume graphics, however, there are no surfaces, so a different approach must be taken. Instead of the normal, the gradient at each voxel is used. The gradient always points in the direction of the strongest material change, which comes closest to the physical intuition that light is reflected mainly at transitions between different material layers.
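The gradient is typically estimated with central differences and then used in place of the surface normal. The sketch below shows only the diffuse term of the Phong model and assumes a hypothetical volume layout volume[z][y][x]:

```python
import math

def gradient(volume, x, y, z):
    # Central differences as a substitute for the surface normal.
    # Border voxels are not handled in this sketch.
    gx = (volume[z][y][x + 1] - volume[z][y][x - 1]) / 2.0
    gy = (volume[z][y + 1][x] - volume[z][y - 1][x]) / 2.0
    gz = (volume[z + 1][y][x] - volume[z - 1][y][x]) / 2.0
    return (gx, gy, gz)

def diffuse_shading(volume, x, y, z, light_dir, k_d=1.0):
    # Diffuse (Lambertian) term of the Phong model: k_d * max(N . L, 0),
    # with the normalized gradient standing in for the normal N.
    gx, gy, gz = gradient(volume, x, y, z)
    length = math.sqrt(gx * gx + gy * gy + gz * gz)
    if length == 0.0:          # homogeneous region: no material transition,
        return 0.0             # hence nothing is reflected
    n = (gx / length, gy / length, gz / length)
    dot = n[0] * light_dir[0] + n[1] * light_dir[1] + n[2] * light_dir[2]
    return k_d * max(dot, 0.0)

# Density increases along x, so the gradient points in the +x direction:
volume = [[[float(x) for x in range(3)] for _ in range(3)] for _ in range(3)]
print(diffuse_shading(volume, 1, 1, 1, (1.0, 0.0, 0.0)))  # -> 1.0
```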


Interpolation

During interpolation, also called resampling, the material properties at points between the voxels are approximated from the surrounding voxels. Voxels are zero-dimensional point objects: they have neither length nor width nor height. This makes it extremely unlikely that a viewing ray running through the volume hits even a single voxel exactly. The interpolation scheme defines how to obtain information about the space between voxels from the voxels themselves. In volume rendering, linear interpolation predominates: between two voxels it is linear, between four voxels forming a rectangle it is bilinear, and between eight voxels forming a cube it is trilinear.
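Trilinear interpolation can be written as three nested linear interpolations. In this sketch, c[i][j][k] holds the eight corner values of the voxel cube surrounding the sample point, and fx, fy, fz are the fractional coordinates inside that cube:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def trilinear(c, fx, fy, fz):
    # Interpolate along x on four cube edges, then along y, then along z.
    c00 = lerp(c[0][0][0], c[1][0][0], fx)
    c01 = lerp(c[0][0][1], c[1][0][1], fx)
    c10 = lerp(c[0][1][0], c[1][1][0], fx)
    c11 = lerp(c[0][1][1], c[1][1][1], fx)
    c0 = lerp(c00, c10, fy)
    c1 = lerp(c01, c11, fy)
    return lerp(c0, c1, fz)

# Corner values grow linearly with the x index, so the interpolated
# value equals the fractional x coordinate:
corners = [[[float(i) for _ in range(2)] for _ in range(2)] for i in range(2)]
print(trilinear(corners, 0.25, 0.5, 0.5))  # -> 0.25
```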


Compositing

During compositing, the light contributions of the voxels lying along a ray are combined with one another to obtain a final pixel. Compositing is the actual evaluation of the visualization equation: light passes through a voxel, which changes its intensity and color, then crosses the next voxel, which changes them again. This continues until the light reaches the image plane and colors a pixel there.
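Compositing can be sketched as repeated application of the over operator on premultiplied RGBA samples (an illustrative implementation, not taken from the article):

```python
def over(front, back):
    # "front over back" for premultiplied-alpha RGBA tuples:
    # every channel of the back sample is attenuated by the
    # remaining transparency (1 - alpha_front) and added.
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    t = 1.0 - fa
    return (fr + t * br, fg + t * bg, fb + t * bb, fa + t * ba)

def composite(samples):
    # samples are ordered front (nearest the viewer) to back;
    # folding "over" across them yields the final pixel.
    pixel = (0.0, 0.0, 0.0, 0.0)
    for s in samples:
        pixel = over(pixel, s)
    return pixel

# An opaque red sample completely hides the green one behind it:
print(composite([(1.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 1.0)]))
```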


Over time, four techniques have been developed to render voxel data:

  • Volume ray casting. Similar to ray tracing, viewing rays are cast into the volume. After the interaction of the light with the volume has been calculated, the contributions along each viewing ray are summed to yield one pixel of the image.
  • Splatting. The voxels are thrown into the image one after another, where they burst into a "splat" and distribute their color contribution over several adjacent pixels of the image.
  • Shear-warp. The volume itself is sheared and warped according to the perspective; then a viewing ray is sent into the volume for each pixel. Thanks to the shearing and warping of the grid, the viewing rays run axis-parallel inside the volume and can be computed very cheaply.
  • 3D texture mapping. Instead of using a genuine volume rendering technique, a graphics primitive is created and the voxel data are assigned to it as a 3D texture. The volume is then visualized via ordinary texturing in surface graphics.

A distinction is made between image-based and object-based techniques, depending on whether the computation proceeds from the image or from the volume, along with hybrids of the two. Volume ray casting is an image-based method, splatting an object-based one; shear-warp and 3D texture mapping are hybrids.
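The image-based nature of volume ray casting can be illustrated with a deliberately simplified sketch: one ray per pixel, nearest-neighbor sampling in place of trilinear interpolation, a hypothetical transfer function, and front-to-back compositing with early ray termination. All names and values are illustrative:

```python
def sample(volume, x, y, z):
    # Nearest-neighbor lookup stands in for trilinear interpolation here;
    # the volume is a nested list indexed as volume[z][y][x].
    xi, yi, zi = round(x), round(y), round(z)
    if 0 <= zi < len(volume) and 0 <= yi < len(volume[0]) \
            and 0 <= xi < len(volume[0][0]):
        return volume[zi][yi][xi]
    return 0.0

def classify(value):
    # Hypothetical transfer function: density maps to gray and opacity.
    alpha = min(value, 1.0) * 0.5
    return (alpha * value, alpha * value, alpha * value, alpha)  # premultiplied

def cast_ray(volume, origin, direction, step=0.5, n_steps=32):
    r = g = b = a = 0.0
    for i in range(n_steps):
        if a >= 0.99:          # early ray termination: nearly opaque already
            break
        px = origin[0] + direction[0] * step * i
        py = origin[1] + direction[1] * step * i
        pz = origin[2] + direction[2] * step * i
        sr, sg, sb, sa = classify(sample(volume, px, py, pz))
        t = 1.0 - a            # front-to-back over operator
        r, g, b, a = r + t * sr, g + t * sg, b + t * sb, a + t * sa
    return (r, g, b, a)

# A 4x4x4 volume with a dense "block" in the middle:
N = 4
volume = [[[1.0 if 1 <= x <= 2 and 1 <= y <= 2 and 1 <= z <= 2 else 0.0
            for x in range(N)] for y in range(N)] for z in range(N)]
pixel = cast_ray(volume, origin=(1.5, 1.5, -2.0), direction=(0.0, 0.0, 1.0))
print(pixel)   # the ray through the block yields a mostly opaque gray pixel
```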


Computing shadows in voxel graphics is not a trivial task. The shadow-volume and Z-buffer shadow techniques used in surface graphics are not applicable, since they require opaque surfaces. To date there are only two methods that are reasonably simple:

Grant's technique computes shadows for a single light source that shines light into the scene at an angle of 45° relative to the viewing direction of the observer and is infinitely far away, so that its rays are parallel. First, a Z-buffer is set up for the front vertical slice of the volume and initialized everywhere with the value 1. The shading for this slice is then computed by calculating, for each voxel, the light radiated toward the viewer as (intensity of the incident light) × (value in the Z-buffer). Next, the Z-buffer is shifted one voxel away from the light source toward the back. From each value in the Z-buffer, the absorption of the voxel immediately in front of it in the direction of the light source is subtracted. The radiance calculation follows, then a further shift, and so on.

Meinzer's technique works similarly to Grant's, but uses an additional light source behind the observer, lying in one plane with the obliquely oriented light source. As a result, the required Z-buffer gets by with a single row of voxels. The additional light source also brightens the shadows.

  • Image synthesis