Ray tracing (graphics)

Ray tracing (German Strahlverfolgung or Strahlenverfolgung; in English usually written ray tracing) is an algorithm, based on the emission of rays, for hidden surface determination, i.e. for determining the visibility of three-dimensional objects from a certain point in space. Ray tracing also denotes several extensions of this basic method that compute the further path of rays after they strike surfaces.

The most prominent use of ray tracing is in 3D computer graphics, where the basic ray tracing algorithm is one way of displaying a 3D scene. Extensions that simulate the path of light rays through the scene serve, like the radiosity method, to compute the distribution of light.

Further application areas of ray tracing are auralization and high-frequency engineering.


Origin and Meaning

Before the development of ray tracing, the young field of 3D computer graphics consisted essentially of a series of "programming tricks" that tried to imitate the shading of illuminated objects. Ray tracing was the first algorithm in this field that carried a certain physical meaning.

The first image computed with ray tracing was output in 1963 at the University of Maryland on an oscilloscope-like screen. Arthur Appel, Robert Goldstein and Roger Nagel, who published the algorithm in the late 1960s, are often regarded as the developers of ray tracing. Others who investigated ray tracing techniques at that time include Herb Steinberg, Marty Cohen and Eugene Troubetskoy. Ray tracing is based on geometrical optics, in which light is regarded as a group of rays. The techniques used in ray tracing were employed much earlier, among others by manufacturers of lenses. Today many renderers (computer programs for generating images from a 3D scene) use ray tracing, possibly in combination with other methods.

Simple forms of ray tracing compute only the direct illumination, i.e. the light arriving directly from the light sources. Since its first use in computer graphics, however, ray tracing has been substantially extended several times. More sophisticated forms also take account of the indirect light reflected by other objects; one then speaks of a global illumination method.

The term ray casting usually denotes a simplified form of ray tracing, but is sometimes also used synonymously.

Basic principle

The generation of a raster image from a 3D scene is called rendering or image synthesis. Such a scene is first created by the user with a 3D modeling tool.

The scene description contains at least the following data:

  • The position of elementary primitives, such as polygons or spheres that compose the objects in the scene;
  • The local illumination model and the parameters that determine the colors and materials of the individual objects in the scene;
  • The light sources of the scene.

In addition, for ray tracing the position of an eye point and an image plane are specified, which together determine the perspective from which the scene is viewed. The eye point is a point in space corresponding to the position of a virtual camera or a general observer. The image plane is a virtual rectangle located at some distance from the eye point. It is the three-dimensional counterpart in space of the raster image to be rendered. Points distributed grid-like on the image plane correspond to the pixels of the raster image to be generated.

Hidden surface determination

Ray tracing is primarily a method for hidden surface determination, i.e. for determining the visibility of objects from the eye point. The basic principle is quite simple.

Ray tracing works with a data structure called a ray, which specifies the starting point and direction of a half-line in space. For each pixel, the direction of the ray pointing from the eye point to the corresponding pixel of the image plane is computed. For each primitive of the scene, the possible intersection point at which the ray hits the primitive is then determined by geometrical methods. If there is one, the distance from the eye point to the intersection point is computed. The "winner", i.e. the primitive visible from the eye point, is the one with the smallest distance.

The principle of emitting rays from the eye point corresponds in structure to a pinhole camera, in which an object is imaged onto film. With ray tracing, however, "film" (image plane) and "hole" (eye point) are interchanged. As with the pinhole camera, the distance between the image plane and the eye point determines the "focal length" and thus the field of view.

Since the rays do not emanate from the light sources, as in nature, but from the eye point, one also speaks of backward ray tracing: ray tracing asks where the light comes from. Some publications, however, call this method forward ray tracing or eye ray tracing.

Intersection tests

The test mentioned above for a possible intersection of ray and primitive is the centerpiece of ray tracing. Such tests can be formulated for a variety of primitive types. Besides triangles and spheres, among others cylinders, quadrics, point clouds or even fractals are possible.

For spheres the intersection test is a relatively short and simple procedure, which explains the popularity of these objects in ray tracing test images. For reasons of simplicity, however, many rendering programs allow only triangles as primitives, from which any object can be approximately composed.

Recently, more complex geometries such as NURBS have also been used for the intersection test. The advantage is maximum precision, since the surface is not, as usual, subdivided into triangles. The disadvantage is an increased rendering time, since the intersection test with complex free-form surfaces is far more elaborate than with simple triangles. A reasonable approximation of the precision of NURBS is also possible with triangles, but in that case a very large number of them must be chosen.

Shading

When the nearest primitive has been determined, not only the intersection point and its distance from the eye point but also the normal of the primitive at the intersection point are computed. Thus all the information is available to determine the "light intensity" reflected toward the eye point, and hence the color. The descriptions of the light sources of the scene are also used. The calculations rest on local illumination models that simulate the material properties of an object. The part of the renderer responsible for determining the color is called a shader.

Sample Code

Programming a simple ray tracer requires little effort. In pseudocode, the principle can be represented as follows:

Procedure Bild_Rendern
    Strahl.Ursprung := eye point
    For each (x, y) pixel of the raster graphic
        Strahl.Richtung := [3D coordinates of the pixel on the image plane] − eye point
        Color of the (x, y) pixel := Farbe_aus_Richtung(Strahl)

Function Farbe_aus_Richtung(Strahl)
    Schnittpunkt := Nächster_Schnittpunkt(Strahl)
    If Schnittpunkt.Gewinner = (none) then
        Farbe_aus_Richtung := background color
    otherwise
        Farbe_aus_Richtung := Farbe_am_Schnittpunkt(Strahl, Schnittpunkt)

Function Nächster_Schnittpunkt(Strahl)
    MaxDistanz := ∞
    Schnittpunkt.Gewinner := (none)
    For each Primitiv of the scene
        Schnittpunkt := Teste_Primitiv(Primitiv, Strahl)
        If Schnittpunkt.Distanz < MaxDistanz then
            MaxDistanz := Schnittpunkt.Distanz
            Schnittpunkt.Gewinner := Primitiv
    Nächster_Schnittpunkt := Schnittpunkt

Every ray tracer, regardless of the ray tracing variant used, follows a similar structure that also contains an intersection test (Teste_Primitiv) and a shader (Farbe_am_Schnittpunkt).
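The nearest-intersection search above can also be sketched in runnable form. The following is a minimal Python version for sphere primitives; the function names mirror the pseudocode but are otherwise illustrative, not a fixed API.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t to the nearest hit of the ray with a sphere,
    or None if the ray misses (direction is assumed to be normalized)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def nearest_intersection(origin, direction, spheres):
    """Analogue of Naechster_Schnittpunkt: test every primitive against the
    ray and keep the one with the smallest positive distance."""
    winner, max_dist = None, math.inf
    for center, radius in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and t < max_dist:
            max_dist, winner = t, (center, radius)
    return winner, max_dist
```

A sphere five units in front of the eye point, for instance, wins against one ten units away.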

Performance

Acceleration techniques

When determining the first primitive that a ray hits, as in the example code above, every primitive of the scene is tested against the ray. This is not always necessary, however, if it is known that certain primitives are nowhere near the ray anyway and therefore cannot be hit. Since intersection tests claim the largest share of the running time of ray tracing, it is important to test as few primitives as possible against the ray in order to keep the total running time low.

With acceleration techniques, the scene is usually subdivided automatically in some form and the primitives are assigned to these subdivisions. When a ray wanders through the scene, it is not tested directly against the primitives but first against the subdivisions. The ray then only needs to be tested against the primitives of those subdivisions that it crosses.
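As an illustration of this idea, the following Python sketch tests a ray against an axis-aligned bounding box (the classic "slab" test) before any primitive inside the box would have to be tested; the geometry is illustrative.

```python
import math

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: returns True if the ray intersects the axis-aligned box.
    Only then must the primitives assigned to the box be tested at all."""
    t_near, t_far = -math.inf, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray parallel to this slab: it must start between the two planes
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0.0
```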

A number of such acceleration techniques have been developed for ray tracing. Examples of subdivision schemes are voxel grids, BSP trees, and bounding volumes that enclose the primitives and form a hierarchy. Combinations of these techniques are also popular. There are also special acceleration techniques for animations. The complexity of these techniques can quickly let a ray tracer grow into a larger project.

No technique is always optimal; the efficiency depends on the scene. However, every acceleration technique reduces the running time enormously and is what makes ray tracing a practicable algorithm in the first place. Subdivisions based on kd-trees are the most efficient, or nearly the most efficient, technique for most non-animated scenes, since they can be optimized by means of heuristics. It has been found repeatedly that the asymptotic running time of ray tracing is logarithmic in the number of primitives.

It has been shown that on modern computers it is not processor performance but memory access speed that limits ray tracing. By careful use of caching, the algorithm can reduce the running time significantly. It is also possible to use the SIMD capabilities of modern processors, which enable parallel computations, together with specially optimized subdivision schemes. This makes it possible to trace several rays, combined into "packets", simultaneously. The reason is that the rays emitted from the eye point are usually very similar, so they tend to intersect the same objects. With the SSE instruction set, for example, four rays can be tested simultaneously for intersection with a primitive, which accelerates this computation severalfold. On corresponding hardware implementations, for example on FPGAs, larger packets with over 1000 rays can also be traced. However, caching and SIMD optimizations forfeit much of their speed advantage with more advanced forms of ray tracing.

Furthermore, the entire ray tracing process can be parallelized. This can be achieved trivially, for example, by having different processors or machines render different sections of the image. Only certain acceleration techniques or extensions must be adapted to be suitable for parallelization.

Memory Requirements

The basic ray tracing method requires hardly any memory. However, the scene itself, which nowadays often comprises several million primitives for complex scenes, occupies a great deal of memory and can amount to several gigabytes. Added to this is the more or less considerable additional memory requirement of the acceleration techniques. Since such large scenes do not fit completely into the main memory of the computer, swapping is often necessary.

For larger objects that occur several times in the scene and differ only in their position and size (for example in a forest full of trees), the entire geometry does not have to be stored anew. With this technique, called instancing, considerable space can be saved in certain scenes.
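The idea can be sketched as follows in Python; the class names and the simple position/scale transform are purely illustrative.

```python
class Mesh:
    """Shared geometry: stored once, no matter how many instances use it."""
    def __init__(self, vertices):
        self.vertices = vertices

class Instance:
    """A placement of a shared mesh: only position and scale are stored,
    not a copy of the geometry."""
    def __init__(self, mesh, position, scale):
        self.mesh = mesh
        self.position = position
        self.scale = scale

    def world_vertices(self):
        # Transform the shared local-space vertices into this placement
        return [tuple(self.scale * v + p for v, p in zip(vert, self.position))
                for vert in self.mesh.vertices]

# One tree mesh, a whole forest of instances sharing it
tree = Mesh([(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
forest = [Instance(tree, (x * 5.0, 0.0, 0.0), 1.0) for x in range(1000)]
```

The thousand trees of the forest share a single copy of the geometry; only a position and a scale factor are stored per instance.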

Extensions

One of the reasons for the success of ray tracing is its natural extensibility. The primitive method described above is insufficient for the demands of today's image synthesis. With increasing computing power and increasing inspiration from physics, especially optics and radiometry, several extensions and variants emerged, some of which are briefly presented here.

Basically, with each extension the achievable quality of the rendered images and the relative time required rose sharply, reaching their maximum with path tracing. Only subsequent developments aimed at reducing the time requirement of path tracing without sacrificing quality.

Shadows

Thanks to the flexibility of the ray tracing algorithm, light rays can be emitted not only from the eye point but also from any other point in space. As Arthur Appel demonstrated in 1968, this can be used to simulate shadows.

A point on a surface lies in shadow exactly when there is an object between it and the light source. By emitting a shadow ray from the intersection point on the surface toward the light source, it can be determined whether an object crosses its path. If so, the intersection point lies in shadow and 0 is returned as the brightness of the ray. Otherwise, normal shading takes place.
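A shadow ray test for sphere occluders might look like this in Python. This is a sketch; the small origin offset against self-intersection ("shadow acne") is a common practical detail, not part of the basic idea.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Distance to the nearest hit with a sphere, or None (direction normalized)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def in_shadow(point, light_pos, occluders):
    """Emit a shadow ray from the surface point toward the light source; any
    hit closer than the light means the point lies in shadow."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(d * d for d in to_light))
    direction = [d / dist for d in to_light]
    # Offset the origin slightly to avoid re-hitting the surface itself
    origin = [p + 1e-4 * d for p, d in zip(point, direction)]
    for center, radius in occluders:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and t < dist:
            return True
    return False
```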

Recursive ray tracing

Ray tracing can be applied not only to simple opaque objects but also to transparent and specularly reflecting ones. In this case further rays are emitted from the intersection points. For a reflecting surface, only the direction of the ray leaving the surface according to the law of reflection (angle of incidence equals angle of reflection) needs to be taken into account, and a corresponding reflected ray can be computed.

For translucent objects, a ray is emitted according to Snell's law of refraction, this time into the interior of the object concerned. In general, transparent objects also reflect part of the light. The relative color contributions of the reflected and the refracted ray can be computed with the Fresnel formulas. These rays are also called secondary rays.
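The reflected direction follows directly from the law of reflection; for the Fresnel term, Schlick's approximation is often used in practice in place of the full Fresnel formulas. A Python sketch (the refractive indices chosen are illustrative):

```python
def reflect(direction, normal):
    """Law of reflection: r = d - 2 (d.n) n, with n the unit surface normal."""
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2.0 * d_dot_n * n for d, n in zip(direction, normal))

def schlick_fresnel(cos_theta, n1=1.0, n2=1.5):
    """Schlick's approximation to the Fresnel reflectance for the incidence
    angle whose cosine is cos_theta, at a boundary between media n1 and n2."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5
```

At perpendicular incidence (cos_theta = 1) the reflectance approaches r0; at grazing incidence (cos_theta = 0) it approaches 1, i.e. the surface acts like a mirror.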

Since the secondary rays can fall on other objects, the algorithm is called recursively to enable multiple reflections and refractions of light. The hierarchical totality of the calls is also called the render tree.

Recursive ray tracing has been developed around 1980 by Kay and Whitted.

In pseudocode, the shader for recursive ray tracing looks something like this:

Function Farbe_am_Schnittpunkt(Strahl, Schnittpunkt)
    If Schnittpunkt.Gewinner.Material = specular or transparent then
        Reflektierter_Anteil := Fresnel(Strahl, Schnittpunkt)
        Farbe := Reflektierter_Anteil × Farbe_aus_Richtung(reflected ray)
               + Gebrochener_Anteil × Farbe_aus_Richtung(refracted ray)
    otherwise
        Farbe := 0
        For each light source
            Schattenstrahl := Lichtquelle.Position − Schnittpunkt.Position
            Schattenschnittpunkt := Nächster_Schnittpunkt(Schattenstrahl)
            If Schattenschnittpunkt.Gewinner = light source then
                Farbe := Farbe + Direkte_Beleuchtung(Strahl, Lichtquelle)
    Farbe_am_Schnittpunkt := Farbe

The rest of the program can remain as in simple ray tracing. The function Farbe_aus_Richtung called here can in turn call Farbe_am_Schnittpunkt, which makes the recursive nature of the method clear.

Diffuse raytracing

Recursive ray tracing enables, besides refraction and reflection of light, the simulation of hard shadows. In reality, however, light sources have a certain extent, which makes shadows soft and blurred.

This effect, as well as antialiasing, glossy reflection and more, can be simulated with diffuse ray tracing (also called stochastic ray tracing or distributed ray tracing), which was published in 1984 by Cook et al. The idea is to send out several rays in various situations instead of a single ray, and to form the computed color as the mean of their results. For example, soft shadows with umbra and penumbra can be produced by having the randomly distributed directions of the shadow rays sample the surface of the light source. The disadvantage is that noise occurs if too few rays are used. However, there are techniques such as importance sampling that reduce the noise.
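The soft-shadow case can be sketched in Python: many shadow rays are aimed at randomly jittered points on an extended light source, and their hit ratio is averaged into a visibility factor. The scene layout, the disc-shaped light and the sample count are illustrative.

```python
import math
import random

def intersect_sphere(origin, direction, center, radius):
    """Distance to the nearest hit with a sphere, or None (direction normalized)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def blocked(point, target, occluders):
    """Is the straight path from point to target interrupted by an occluder?"""
    seg = [t - p for t, p in zip(target, point)]
    dist = math.sqrt(sum(s * s for s in seg))
    direction = [s / dist for s in seg]
    for center, radius in occluders:
        t = intersect_sphere(point, direction, center, radius)
        if t is not None and t < dist:
            return True
    return False

def soft_shadow(point, light_center, light_radius, occluders, samples=64, seed=1):
    """Average many shadow rays toward random points on the light: the result
    is a visibility factor between 0 (full umbra) and 1 (fully lit)."""
    rng = random.Random(seed)
    lit = 0
    for _ in range(samples):
        # Jitter the target across the extent of the light source
        target = (light_center[0] + rng.uniform(-light_radius, light_radius),
                  light_center[1] + rng.uniform(-light_radius, light_radius),
                  light_center[2])
        if not blocked(point, target, occluders):
            lit += 1
    return lit / samples
```

Points in the penumbra see only part of the light source and therefore receive a visibility factor between 0 and 1; with few samples, this factor is visibly noisy.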

Path tracing and light ray tracing

Although diffuse ray tracing enables numerous effects, it is still not able to simulate complete global illumination with effects such as diffuse interreflection and caustics (bright spots of light produced by the bundling of light). The reason is that secondary rays are emitted only at specular reflections, but not at diffuse surfaces.

In his 1986 publication, James Kajiya described the rendering equation, which forms the mathematical basis for all methods of global illumination. The "brightness" contributed by a ray is here interpreted in a radiometrically correct way as radiance. Kajiya showed that for global illumination, secondary rays must be emitted from every surface. He also pointed out that a render tree has the disadvantage that too much computational work is wasted deep in the hierarchy, and that it is better to emit just a single ray at a time. This method is known today as path tracing, since a ray searches its "path" through the scene starting from the eye point. Path tracing has a rigorous mathematical and physical basis.
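In common notation (not written out in the source text, added here for reference), the rendering equation for the radiance leaving a surface point $x$ in direction $\omega_o$ reads:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Here $L_e$ is the emitted radiance, $f_r$ the bidirectional reflectance distribution function (BRDF) of the surface, $L_i$ the radiance arriving from direction $\omega_i$, and $n$ the surface normal; the integral runs over the hemisphere $\Omega$ above the point.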

If a secondary ray emitted from a diffuse surface during path tracing hits a light source directly, this brightness contribution is usually ignored. Instead, the contribution of the direct lighting continues to be computed with a shadow ray. Alternatively, the direct lighting can be computed by emitting only a single secondary ray according to the local illumination model and, if it hits a light source directly, returning its radiance. Which of these two methods is more efficient depends on the local illumination model and on the solid angle of the light source as seen from the surface. The conceptually simpler variant of path tracing, in which no shadow rays are emitted, is known as adjoint photon tracing.

Although path tracing can simulate global illumination, the efficiency of the method decreases with small light sources. In particular, caustics and their reflections are very noisy with path tracing unless a very large number of rays are emitted. Therefore, other methods or extensions based on path tracing are mostly used.

Light ray tracing is a rarely used variant in which the rays are emitted not from the eye point but from the light sources. The pixels that the rays hit on the image plane are colored. Certain effects such as caustics can thereby be simulated well, but other effects only very inefficiently, since many rays miss the image plane.

Further developments

Since some effects can be simulated well only from the eye point and others only from the light sources, algorithms have been developed that combine both methods. The goal is to be able to efficiently render scenes with arbitrarily complex light distribution and reflection.

Overview

Special features

The common variants of ray tracing listed above can be extended to enable additional effects. A few examples:

Areas of application

Computer Graphics

Ray tracing computations are considered very time-consuming. Ray tracing is therefore used primarily in the production of images for which quality matters more than computation time. Computing an image with ray tracing can take arbitrarily long depending on the technique used, the scene complexity, the hardware, and the desired quality: in practice often several hours, in individual cases even several days. In areas such as virtual reality, in which spatial representations must be computed in real time, ray tracing has therefore been unable to establish itself. Computer-animated films are predominantly produced with the REYES system, in which ray tracing computations are avoided wherever possible. Occasionally, ray tracing was also used by the demoscene.

Compared to common real-time renderers based on the z-buffer, however, ray tracing has several advantages: a simple implementation of manageable complexity, a high degree of flexibility in contrast to the graphics pipeline, and the easier interchangeability of the shaders and hence easier implementation of new shaders. The speed of ray tracing must therefore be set in relation to the achieved image quality. For the demanding quality requirements of realistic image synthesis, especially for complicated scenes with arbitrary materials, there is no alternative to ray tracing.

There are efforts to implement real-time ray tracers for complex scenes, which has already been achieved under certain conditions with processor- and memory-optimized software solutions. Hardware-optimized implementations of ray tracing show that the future widespread use of ray tracing in the real-time domain is conceivable. Projects such as the OpenRT programming interface and various implementations for programmable graphics processors (GPGPU) deal with such applications. In addition, special architectures for hardware-accelerated ray tracing have been developed.

Other areas of application

The ray tracing principle can be extended to any application area in which the propagation of waves in a scene is to be simulated. Rays here always represent the normal vectors of a wavefront. In auralization and high-frequency engineering, one tries to simulate the effect of a scene on the acoustic field or the electromagnetic field, respectively. The goal is to compute, for certain frequencies, the fraction of the energy that is transmitted from a transmitter to a receiver over the various possible paths through the scene.

In acoustics, ray tracing is one way to solve this problem, alongside the mirror image source method and diffuse sound computation. For the simulation, the material properties of the various bodies as well as the attenuation of the sound by the air must be taken into account.

One way of finding the transmission paths is to emit rays from a source isotropically (in all directions), to reflect them at the materials, possibly with energy loss, and to determine the total energy of the rays arriving at the receiver. This method is called ray launching. Rays can also be given a certain "shape", for example that of a tube, in order to be able to simulate point-like receivers. The disadvantage of this method is its slowness, since many rays never reach the receiver and a high number of them is needed for accurate statistics. A further problem arises from the fact that the wavelength often cannot be neglected in relation to the dimensions of the bodies within a scene. If the diffraction of rays is not taken into account, noticeable errors can therefore occur in the simulation.
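The ray launching idea can be sketched as a Monte Carlo estimate in Python. This toy version only estimates the fraction of isotropically emitted rays that directly hit a sphere-shaped receiver, with no reflections or material damping; the geometry and sample count are illustrative.

```python
import math
import random

def launch_rays(source, receiver_center, receiver_radius, n_rays=100000, seed=1):
    """Emit n_rays isotropically from the source and return the fraction that
    hits a spherical receiver on the direct path (no reflections modeled)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        # Uniformly distributed direction on the unit sphere
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        d = (r * math.cos(phi), r * math.sin(phi), z)
        # Closest approach of the ray to the receiver center
        oc = [c - s for c, s in zip(receiver_center, source)]
        t = sum(o * di for o, di in zip(oc, d))
        if t > 0.0:
            closest_sq = sum((o - t * di) ** 2 for o, di in zip(oc, d))
            if closest_sq <= receiver_radius ** 2:
                hits += 1
    return hits / n_rays
```

For a receiver of radius 1 at distance 10, the hit fraction approaches the solid-angle fraction (1 − √(1 − 0.01))/2 ≈ 0.0025, which illustrates why very many rays are needed for accurate statistics.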
