Rendering (computer graphics)

Image synthesis or rendering denotes, in computer graphics, the generation of an image from raw data (also called a scene). A scene is a virtual three-dimensional model that defines objects and their material properties, light sources, and the position and viewing direction of an observer. Computer programs for rendering images are called renderers.

Rendering usually involves solving the following tasks:

  • Determining the objects visible to the virtual observer (occlusion calculation)
  • Simulating the appearance of surfaces as influenced by their material properties (shading)
  • Calculating the distribution of light within the scene, which manifests itself, among other things, in the indirect illumination between bodies

In addition, the production of computer animations requires some further techniques. An important application area is the interactive synthesis of images in real time, for which hardware acceleration is mostly used. Realistic image synthesis, by contrast, emphasizes high image quality or physical correctness, while computation time plays a subordinate role.

Real-time rendering

In real-time rendering, a series of images is computed rapidly while the user interactively changes the underlying scene. The computation happens fast enough that the image sequence is perceived as a dynamic process. Interactive use becomes possible at a frame rate of approximately 6 fps; at 15 fps and above one can speak of real time with certainty. On modern computers, real-time rendering is supported by hardware acceleration on graphics cards. With few exceptions, graphics hardware supports only points, lines, and triangles as basic graphic objects.

Graphics pipeline

In real-time rendering, the graphics pipeline describes the path from the scene to the finished image. It is a conceptual model that can vary depending on the system. The graphics pipeline is often implemented in stages, similar to processor pipelines, in which calculations run in parallel. A graphics pipeline can be divided into three major steps: application, geometry, and rasterization.

The application step makes all changes to the scene that the user specifies as part of the interaction and passes them on to the next stage of the pipeline. In addition, techniques such as collision detection, animation, morphing, and acceleration schemes based on spatial subdivision are applied here.

The geometry step handles the majority of the operations on the vertices, the corner points of the basic objects. It can be divided into various sub-steps that successively perform transformations into different coordinate systems. To simplify the perspective projection, almost all geometric operations of the geometry step work with homogeneous coordinates: points are defined by four coordinates and transformations by 4 × 4 matrices.

First, all basic objects of the scene are transformed so that the virtual observer looks along the z (depth) axis. If the scene contains light sources, a color is computed for each vertex based on the material properties of the corresponding triangle. The volume of the scene visible to the observer is a truncated pyramid (frustum). In the next step, this frustum is transformed into a cube, which corresponds to a central projection. Basic objects that lie partially or completely outside the visible volume are clipped or discarded using clipping and culling techniques. Finally, a transformation is applied that moves the vertex coordinates to the desired drawing area of the screen. The z coordinates are retained, since they are needed for the later occlusion calculation.
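
As a minimal sketch (not from the original article, all values illustrative), the following Python code builds an OpenGL-style perspective matrix and applies it to a point in homogeneous coordinates; the division by w afterwards yields normalized device coordinates:

```python
import math

def perspective_matrix(fov_y, aspect, near, far):
    """Build a 4x4 perspective projection matrix (OpenGL-style convention)."""
    f = 1.0 / math.tan(fov_y / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def transform(m, p):
    """Multiply a 4x4 matrix by a homogeneous point (x, y, z, w)."""
    return [sum(m[i][j] * p[j] for j in range(4)) for i in range(4)]

# A vertex 5 units in front of the observer, who looks along the -z axis.
proj = perspective_matrix(math.radians(60), 16 / 9, 0.1, 100.0)
x, y, z, w = transform(proj, [1.0, 1.0, -5.0, 1.0])
print(x / w, y / w, z / w)   # perspective division gives screen-space coordinates
```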

In the rasterization step, all remaining projected primitives are rasterized: the pixels belonging to them are colored in. Since only the visible parts of overlapping triangles should be displayed, a Z-buffer is used, which takes over the occlusion calculation.
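
A minimal sketch of the Z-buffer principle (an illustrative example, not part of the original text): a pixel is overwritten only if the new fragment lies closer to the observer than the fragment already stored there.

```python
WIDTH, HEIGHT = 4, 4
FAR = float("inf")

# Color buffer and depth buffer, initialized to background and "infinitely far".
color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, depth, color):
    """Z-buffer test: keep the fragment only if it is nearer than the stored one."""
    if depth < z_buffer[y][x]:
        z_buffer[y][x] = depth
        color_buffer[y][x] = color

write_fragment(1, 1, 5.0, (255, 0, 0))   # red fragment at depth 5
write_fragment(1, 1, 3.0, (0, 255, 0))   # green fragment is closer and wins
write_fragment(1, 1, 9.0, (0, 0, 255))   # blue fragment is farther, discarded
print(color_buffer[1][1])                # -> (0, 255, 0)
```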

Graphics APIs are usually used to control graphics pipelines; they abstract away the graphics hardware and relieve the programmer of many tasks. The OpenGL standard, originally introduced by Silicon Graphics, contributed substantially to the development of real-time rendering. The latest innovations of OpenGL and Microsoft's DirectX are applied mainly in modern computer games. Besides DirectX and OpenGL there were other approaches, such as Glide, which could not establish themselves. OpenGL is of great importance in professional fields, whereas DirectX is heavily optimized for game development. DirectX is proprietary software available only on Microsoft Windows; it is not an open standard.

Past techniques

The first interactive technique for occlusion calculation was published in 1969 by Schumacker and others. Schumacker's algorithm served flight simulation for the U.S. armed forces, an application in which heavy investments in graphics hardware were always made.

In the early days of computer games with interactive 3D graphics, all computationally intensive graphics operations were still performed by the computer's main processor. Therefore, only very simple and restricted rendering methods could be used. The first-person shooter Wolfenstein 3D (1992) used ray casting for the occlusion calculation, with which only a fixed height level and adjoining rooms with perpendicular walls could be represented. Doom combined ray casting with two-dimensional binary space partitioning in order to increase efficiency and to render more complex scenes.

Shading and direct illumination

Shading (German: Schattierung) generally refers to the calculation of surface colors from the associated material properties and the light arriving directly from the light sources. Shading is used both in real-time rendering and in realistic rendering. The indirect illumination from other surfaces is initially ignored. Non-photorealistic shading techniques (non-photorealistic rendering) are a special case, in which, for example, distortions are created for aesthetic reasons, such as cel shading for comic-like images.

Light sources and shadows

In modeling, various types of light sources are common, often not physically correct ones. Directional lights emit parallel light rays without attenuation in a certain direction, point lights emit light uniformly in all directions, and spot lights emit light only within a cone-shaped region. In reality, light sources have a certain area, and the light intensity decreases with the square of the distance. This is taken into account in realistic image synthesis, whereas in real-time rendering mostly only simple light sources are used.
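
As a small illustration of the inverse square law mentioned above (the numbers are assumptions):

```python
import math

def point_light_intensity(power, distance):
    """Intensity received from an isotropic point light: doubling the
    distance quarters the intensity (inverse square law)."""
    return power / (4.0 * math.pi * distance ** 2)

print(point_light_intensity(100.0, 1.0))   # ~7.96
print(point_light_intensity(100.0, 2.0))   # a quarter of that, ~1.99
```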

Shadows are an important element of computer-generated images because they allow the user to draw conclusions about the placement of objects in space. Because light sources have a certain size, shadows in reality appear more or less blurred. Realistic rendering methods take this into account.

Local illumination models

Local illumination models describe the behavior of light on surfaces. When a light particle strikes a body, it is either reflected, absorbed, or, except in the case of metals, refracted into the interior of the body. Incident light is mirrored only on very smooth surfaces; for non-metallic objects, the relative proportions of reflected and refracted light are described by the Fresnel formulas.
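
As a hedged illustration (not part of the original text): the Schlick approximation is a widely used simplification of the Fresnel formulas. The sketch below computes the reflected fraction of light for an assumed air-to-glass interface.

```python
def schlick_fresnel(cos_theta, n1=1.0, n2=1.5):
    """Schlick's approximation of the Fresnel reflectance.

    cos_theta: cosine of the angle between incident ray and surface normal.
    n1, n2:    refractive indices (defaults: air to glass, an assumption).
    """
    r0 = ((n1 - n2) / (n1 + n2)) ** 2           # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0))   # head-on: ~0.04, i.e. 4 % reflected
print(schlick_fresnel(0.1))   # grazing angle: reflectance rises sharply
```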

Microscopic irregularities cause the light not to be mirrored but to be reflected, with a certain probability, in a deviating direction. The probability distribution that describes this behavior for a material is called the bidirectional reflectance distribution function (BRDF). Local illumination models are usually parameterized BRDFs. Ideally diffuse surfaces can be simulated, for example, with Lambert's law, and shiny surfaces with the Phong illumination model. In real-time rendering, a combination of a diffuse, a glossy, and a constant factor is often used. More physically plausible models have been developed for realistic image synthesis.
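
A minimal sketch of the Phong illumination model mentioned above (vectors and coefficients are illustrative assumptions): the intensity is a sum of a constant (ambient), a diffuse (Lambert), and a glossy (specular) term.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(normal, to_light, to_viewer, ka=0.1, kd=0.7, ks=0.2, shininess=32):
    """Phong lighting: ambient + diffuse (Lambert) + specular term."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    diffuse = max(dot(n, l), 0.0)
    # Reflect the light direction about the normal: r = 2(n·l)n - l
    r = [2 * dot(n, l) * nc - lc for nc, lc in zip(n, l)]
    specular = max(dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

print(phong([0, 0, 1], [0, 1, 1], [0, 0, 1]))
```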

The BRDF assumes that light arriving at a point on the surface exits at exactly that point. In reality, non-metallic bodies scatter light in their interior, which results in a softer appearance. The simulation of this volume scattering matters chiefly for realistic image synthesis.

Interpolated shading

In real-time rendering, there are three common ways to compute the illumination of a triangle. With flat shading, the color is computed for one triangle, and the whole triangle is filled with this color; the facets of which the model is composed thereby become clearly visible. Gouraud shading, supported by most graphics cards, determines the color at each vertex of a triangle, so that during rasterization the color values are interpolated between the vertices, which yields a softer appearance than flat shading. With Phong shading, the normal at each vertex is available along with the vertex. During rasterization, the normals are interpolated, and the local illumination model is evaluated with the interpolated normal. This procedure avoids some display problems of Gouraud shading.
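
A minimal sketch (illustrative, not from the article) of the interpolation used by Gouraud and Phong shading: a fragment inside a triangle receives the barycentric blend of the vertex attributes, namely color values for Gouraud and normals for Phong.

```python
def interpolate(attribs, barycentric):
    """Blend three per-vertex attributes with barycentric weights (u + v + w = 1)."""
    u, v, w = barycentric
    return [u * a + v * b + w * c for a, b, c in zip(*attribs)]

vertex_colors = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# Gouraud shading: interpolate the colors computed at the vertices.
print(interpolate(vertex_colors, (1 / 3, 1 / 3, 1 / 3)))   # -> gray mix

vertex_normals = [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]
# Phong shading: interpolate the normals (renormalized before lighting),
# then evaluate the local illumination model per pixel.
print(interpolate(vertex_normals, (0.5, 0.25, 0.25)))
```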

Mapping techniques

Normally, local illumination models are applied uniformly to an entire object. Mapping techniques serve to simulate surface details caused by variations in color or structure. The material or geometric properties are varied at each point of the surface by a function or a raster image. Many mapping techniques are also supported by graphics hardware. Besides the procedures listed below, many other mapping techniques have been developed.

  • Texture mapping is the oldest mapping technique and serves to map a two-dimensional image (texture) onto a surface, to "paste it on". Besides raster graphics, procedural textures are also used, in which the color at a point is determined by a mathematical function. Various filtering methods are possible when determining a color value. Mip mapping, in which the texture is kept at several image resolutions for reasons of efficiency, is common on graphics hardware. (A lookup sketch follows after this list.)
  • Bump mapping serves to simulate surface unevenness. The actual normal vectors of the surface are perturbed by a bump map. The geometry of an object, however, is not affected.
  • Displacement mapping also serves to simulate surface unevenness, but in contrast to bump mapping the surface geometry is actually changed. Since there are usually not enough vertices available for this, additional surface points are inserted and displaced according to a height field.
  • Environment mapping, also called reflection mapping, serves to simulate mirroring effects in real-time rendering. To do this, a ray is sent from the observer to the reflecting object and reflected. In contrast to ray tracing (see below), the intersection of the reflected ray with the closest surface is not computed. Instead, the color value is looked up in a precomputed image of the scene based on the direction of the ray.
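
As the small illustrative sketch referenced in the texture-mapping item above (all values are assumptions), a nearest-neighbor texture lookup maps a surface coordinate (u, v) in [0, 1] to a texel of the raster image:

```python
# A tiny 2x2 checkerboard texture (rows of RGB texels).
texture = [
    [(255, 255, 255), (0, 0, 0)],
    [(0, 0, 0), (255, 255, 255)],
]

def sample_nearest(tex, u, v):
    """Nearest-neighbor lookup: clamp (u, v) to [0, 1] and pick the closest texel."""
    height, width = len(tex), len(tex[0])
    x = min(int(max(u, 0.0) * width), width - 1)
    y = min(int(max(v, 0.0) * height), height - 1)
    return tex[y][x]

print(sample_nearest(texture, 0.1, 0.1))  # -> (255, 255, 255)
print(sample_nearest(texture, 0.9, 0.1))  # -> (0, 0, 0)
```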

Realistic rendering and global illumination

How realistic a rendered image appears depends largely on the extent to which the distribution of light within the scene has been computed. While shading computes only the direct illumination, indirect illumination involves the reflection of light between objects. This makes effects possible such as rooms that are lit as a whole only by a narrow slit of light. The light path notation is used to specify the capability of a rendering algorithm with respect to the simulation of light. If all types of light reflection are taken into account, one speaks of global illumination. It must be taken into account for a realistic result and is not possible, or possible only to a very limited extent, with real-time methods.

Mathematically, global illumination is described by the rendering equation, which states, in terms of radiometric quantities, how much light reaches a surface point from another surface point after one reflection. The rendering equation can be computed with ray tracing, and for special cases also with radiosity. Besides these two major techniques for realistic image synthesis, variants of the REYES system are used, especially in film technology.
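
In its common form (standard notation, not quoted from the original text), the rendering equation reads

$$L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i$$

where $L_o$ is the radiance leaving point $x$ in direction $\omega_o$, $L_e$ the emitted radiance, $f_r$ the BRDF, $L_i$ the radiance arriving from direction $\omega_i$, $n$ the surface normal, and $\Omega$ the hemisphere above $x$.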

Ray tracing

Ray tracing is primarily an algorithm for occlusion calculation that is based on the perspective emission of rays from the observer. Each ray is tested against all basic objects for an intersection, and if necessary the distance to these objects is computed. The visible object is the one with the smallest distance. In extended forms, ray tracing can also simulate light reflections and refractions.
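
A minimal sketch of the intersection test at the heart of ray tracing (an illustrative example with a sphere as the basic object): the quadratic equation for the ray parameter t is solved, and the nearest hit in front of the observer is returned.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the smallest positive ray parameter t at which the ray hits
    the sphere, or None if it misses. direction is assumed normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c           # a = 1 for a normalized direction
    if disc < 0:
        return None                  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# Ray from the origin along +z toward a sphere centered at (0, 0, 5).
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```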

To compute the global illumination by means of ray tracing, the "light intensity" arriving at each pixel must be determined using the rendering equation. This is done by means of a Monte Carlo simulation, in which many light rays are emitted randomly at the surfaces. Such ray tracing techniques are called Monte Carlo ray tracing; the simplest of these methods is path tracing. These algorithms are comparatively time-consuming, but in scenes with complicated lighting conditions and different materials they are the only possibility. When implemented properly, they also deliver unbiased images; this means that image noise is the only deviation from the correct, fully converged solution. Photon mapping is used to accelerate the computation of the light distribution in ray tracing, but it can lead to visible artifacts.
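
A sketch of the Monte Carlo idea (strongly simplified, all helper names are assumptions): the hemisphere integral of the rendering equation is estimated by averaging over randomly sampled directions.

```python
import math, random

def sample_hemisphere():
    """Uniformly sample a direction on the unit hemisphere around the z-axis."""
    u, v = random.random(), random.random()
    z = u                                # cos(theta), uniform in [0, 1)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * v
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_irradiance(incident_radiance, samples=10000):
    """Monte Carlo estimate of the hemisphere integral of L_i(w) * cos(theta).

    incident_radiance: function mapping a direction to incoming radiance L_i.
    The uniform hemisphere pdf is 1 / (2*pi), hence the weighting below.
    """
    total = 0.0
    for _ in range(samples):
        w = sample_hemisphere()
        total += incident_radiance(w) * w[2]   # w[2] = cos(theta)
    return total * 2.0 * math.pi / samples

# Constant sky of radiance 1: the exact irradiance integral is pi.
print(estimate_irradiance(lambda w: 1.0))      # -> approximately 3.14
```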

Radiosity

In its basic form, the radiosity algorithm is applicable only to ideally diffuse surfaces and is based on subdividing the surfaces into small patches. Under these premises, the rendering equation can be used to set up a system of linear equations over the patches, which is solved numerically; radiosity thus belongs to the finite element methods. Radiosity can be extended to arbitrary materials, but the precision is limited by the number of patches and the resulting memory requirements. One advantage over ray tracing is that the light distribution is computed independently of the viewpoint, and the occlusion calculation is not part of the actual radiosity algorithm. This makes radiosity particularly suitable for rendering static or only slightly animated scenes in real time, provided that a time-consuming precomputation is acceptable.
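
A sketch of the linear system behind radiosity (a toy example; the form factors are made-up values): each patch's radiosity B_i equals its emission plus the reflected sum over all other patches, B_i = E_i + rho_i * sum_j F_ij * B_j, solved here by simple fixed-point (Jacobi) iteration.

```python
# Toy scene: 3 patches. emission[i] = E_i, reflectance[i] = rho_i,
# form_factor[i][j] = F_ij (fraction of light leaving i that reaches j).
emission = [1.0, 0.0, 0.0]            # patch 0 is a light source
reflectance = [0.0, 0.5, 0.8]
form_factor = [
    [0.0, 0.4, 0.4],
    [0.4, 0.0, 0.3],
    [0.4, 0.3, 0.0],
]

B = emission[:]                        # initial guess: direct emission only
for _ in range(50):                    # Jacobi iteration until (near) convergence
    B = [emission[i] + reflectance[i] *
         sum(form_factor[i][j] * B[j] for j in range(3))
         for i in range(3)]

print(B)   # radiosity values including interreflection between the patches
```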

Volume graphics

In volume rendering, the objects to be rendered are described not as surfaces but as spatial data sets in the form of voxel grids. Voxel grids contain values arranged on a grid that describe the "density" of an object. This form of data representation is particularly suitable for objects, such as clouds, that have no clear contours. Special techniques are required to render voxel grids. Since numerous imaging procedures generate voxel data, volume rendering is also important for medicine.
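
A sketch of one simple volume rendering approach (illustrative only, the density field is invented): a ray is marched through the voxel data, and density samples are composited front to back with an emission-absorption model.

```python
import math

def density(x, y, z):
    """Toy 'cloud': a fuzzy ball of density around the point (0, 0, 5)."""
    d2 = x * x + y * y + (z - 5.0) ** 2
    return max(0.0, 1.0 - d2)          # density fades out with distance

def march_ray(origin, direction, step=0.1, steps=100):
    """Front-to-back compositing: accumulate brightness until the ray is opaque."""
    transmittance, brightness = 1.0, 0.0
    x, y, z = origin
    for _ in range(steps):
        x += direction[0] * step; y += direction[1] * step; z += direction[2] * step
        sigma = density(x, y, z)                   # sampled 'density'
        alpha = 1.0 - math.exp(-sigma * step)      # absorption over one step
        brightness += transmittance * alpha        # emission weighted by visibility
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:                   # early exit: fully opaque
            break
    return brightness

print(march_ray((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```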
