Light field

The light field is a function that describes the amount of light traveling in every direction through every point of three-dimensional space.

History

Michael Faraday was the first to propose, in his lecture Thoughts on Ray Vibrations [1.1], that light should be interpreted as a field, much like the magnetic fields on which Faraday had worked for several years. The term light field was coined by Andrei Gershun in a classic publication on the radiometric properties of light in three-dimensional space [1.2]. In more recent computer graphics usage, the term has been somewhat reinterpreted.

The plenoptic function

In geometrical optics, light is described by light rays (the concept of geometrical optics rests, among other things, on the assumptions of incoherent radiation and of objects that are much larger than the wavelength of light). The measure of the amount of light transported along such a ray is the radiance. Radiance is denoted by L and is measured in watts (W) per steradian (sr) per square meter (m²); the steradian measures the solid angle, while the square meter refers to the cross-sectional area of the ray.
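In standard radiometric notation (not spelled out in the text above, but consistent with the units just given), the radiance along a ray can be written as

    L = \frac{\mathrm{d}^2 \Phi}{\mathrm{d}A \, \cos\theta \, \mathrm{d}\Omega}, \qquad [L] = \mathrm{W \, sr^{-1} \, m^{-2}},

where \Phi is the radiant flux, A the area of the surface the ray crosses, \theta the angle between the ray and the surface normal, and \Omega the solid angle.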

The distribution of radiance along light rays in a region of three-dimensional space illuminated by a static (time-invariant) arrangement of light sources is called the plenoptic function [1.3]. The plenoptic function is an idealized function used in image processing and computer graphics to describe the image of a scene from any position, from any viewing angle, at any point in time. It is not used in practice, but it is helpful for understanding various other concepts in image processing and computer graphics. Since rays in space can be parameterized by three spatial coordinates (x, y and z) and two angles, the plenoptic function is five-dimensional. (Wavelength, polarization and time can be treated as additional variables if required, which yields a correspondingly higher-dimensional function.)
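Written out, with \theta and \phi as one possible choice of angular coordinates, the plenoptic function then has the form

    P = P(x, y, z, \theta, \phi),

or, if the optional wavelength \lambda and time t are included,

    P = P(x, y, z, \theta, \phi, \lambda, t).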

Like Adelson, Gershun defined the light field at each point in space as a five-dimensional function. However, he treated it as an infinite collection of vectors, one per incident direction at the point, with lengths proportional to the radiance. Summing these vectors over the entire sphere of possible directions at a point yields a single scalar value, the irradiance at that point, together with a resultant direction. The figure reproduced from Gershun's publication shows this calculation for the case of two light sources. In computer graphics, this vector-valued function over three-dimensional space is also called the vector irradiance field [1.5]. The vector direction at a point in the light field can be interpreted as the surface normal for which a surface placed at that point would receive the maximum irradiance.
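One common way to formalize this construction (the notation follows the radiance L above; the symbols are chosen here for illustration) is

    \vec{E}(x) = \int_{S^2} L(x, \omega)\, \omega \,\mathrm{d}\omega,

the vector irradiance at the point x, obtained by integrating the direction vectors \omega, weighted by the incident radiance, over the full sphere of directions S^2. Its direction corresponds to the interpretation given above: a surface placed at x receives the greatest irradiance when its normal is aligned with \vec{E}(x).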

The 4-D light field

If a scene contains a concave object (for example, the inside of a cupped hand), then light leaving one point of the object may be blocked after only a short distance by another point of the object. No practical device could measure the plenoptic function in such a scene.

Conversely, if the scene is restricted to convex objects, the plenoptic function can be measured easily (for example with a digital camera). Moreover, in this case the function contains redundant information, because the radiance along a light ray does not change from point to point. The redundant information amounts to exactly one dimension, so a four-dimensional function is sufficient here. Parry Moon called this the photic field in 1981 [1.6], while researchers in computer graphics call it the 4-D light field [1.7] or lumigraph [1.8]. Formally, the 4-D light field is defined as the radiance along rays in empty space.

The set of rays in a light field can be parameterized in various ways, a few of which are described below. The most commonly used is the two-plane parameterization. This form cannot represent all rays, for example rays parallel to the two planes if the planes are parallel to each other. Its advantage, however, is that it relates closely to the analytic geometry of perspective imaging. A simple way to think about the two-plane form of the light field is as a collection of many perspective images of the st plane (and of anything on or beyond it), each taken from a different observer position on the uv plane. A light field parameterized in this way is sometimes called a light slab.

Note that a light slab does not mean that the 4-D light field is equivalent to capturing two 2-D planes of information (the latter is only two-dimensional). For example, the pair of points with positions (0,0) in the st plane and (1,1) in the uv plane corresponds to a ray in space, but other rays also pass through the point (0,0) in the st plane or through the point (1,1) in the uv plane. The pair of points, however, describes only this one ray, not all of those other rays.
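To make the two-plane parameterization concrete, the following is a minimal sketch in Python. The placement of the planes (uv at z = 0, st at z = 1) and the function name are choices made for this illustration, not part of the definition:

    import numpy as np

    # Minimal sketch of the two-plane ("light slab") parameterization.
    # Assumed for illustration: the uv plane lies at z = 0 and the st plane
    # at z = 1, with all coordinates in the same arbitrary units.
    Z_UV = 0.0
    Z_ST = 1.0

    def slab_to_ray(u, v, s, t):
        """Map a point pair ((u,v), (s,t)) to the unique ray through both points."""
        p_uv = np.array([u, v, Z_UV])
        p_st = np.array([s, t, Z_ST])
        direction = p_st - p_uv
        direction = direction / np.linalg.norm(direction)
        return p_uv, direction  # ray origin and unit direction

    # The pair (u,v) = (0,0) and (s,t) = (1,1) identifies exactly one ray,
    # even though many other rays also pass through each of the two points.
    # Rays parallel to the two planes cannot be represented in this form.
    origin, direction = slab_to_ray(0.0, 0.0, 1.0, 1.0)
    print(origin, direction)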

Analogy in acoustics

The analog of the 4-D light field in acoustics is the sound field or wave field, as used in wave field synthesis. It corresponds to the Kirchhoff-Helmholtz integral, which states that, in the absence of obstacles, the sound field over time is given by the sound pressure on a plane. This amounts to 2-D information at any point in time and, including time, to a 3-D field.

The reason the sound field has two dimensions (compared with the four-dimensional light field) is that light travels in rays (0-D at a point in time, 1-D as a function of time), whereas by Huygens' principle a sound wave front can be modeled as spherical waves (2-D at a point in time, 3-D as a function of time): light travels in a single direction (two dimensions of information), while sound expands in every direction.

Generation

Light fields are fundamental representations of light. They can be produced in several ways, for example with suitable computer programs or with appropriate recording techniques.

In computer graphics, light fields are typically generated either by rendering a 3-D model or by photographing a real scene. In both cases, images must be captured from a large number of viewpoints to obtain a light field. Depending on the parameterization, these viewpoints lie on a line, a plane, a sphere or some other geometry, although unstructured collections of viewpoints are also possible [5.1].

Devices for capturing light fields photographically include a moving hand-held camera or an automatically moved camera [3.1], cameras mounted on an arc (as in the bullet-time effect known from the film The Matrix), a dense array of cameras [3.2][3.3][3.4], hand-held cameras [3.5][3.6], microscopes [3.7], or other optical systems with a microlens array placed in the optical path (see also plenoptic camera). Some public archives of light field data sets are listed below.

The largest known light field data set (of Michelangelo's statue Night) contains 24,000 1.3-megapixel images. The required number of images depends on the application. If a light field of an opaque object is to be rendered so that it can be viewed from all sides (see the Applications section below), its back must also be photographed. If the object is to be viewed from a short distance and lies on both sides of the st plane, then images must be recorded from closely spaced positions in the uv plane (in the two-plane configuration described above) and with high spatial resolution.

The number and placement of the images in a light field, together with the resolution of the individual images, are referred to as the sampling of the 4-D light field. Many scientific studies have been devoted to light field sampling; a good starting point on this topic is [2.1]. Also of interest are [2.2] on the effect of occlusion, [2.3] on the effects of illumination and reflection, and [2.4] and [2.5] on applications of the plenoptic camera and of 3-D displays.

Applications

Computational imaging refers to all image formation methods that involve a computer. Many of these methods work in the visible wavelength range, and many of those produce light fields. Enumerating the applications of light fields would therefore amount to listing all uses of computational photography in art, science, engineering and medicine.

Some selected applications from the field of computer graphics are listed below:

  • Illumination engineering. Gershun's reason for studying the light field was to derive (in closed form where possible) the illumination patterns that are produced over surfaces by light sources of various positions and shapes. An example is shown in the figure on the right. A more recent study is, for example, [5.2].
  • Light field rendering. By extracting suitable 2-D slices from a 4-D light field, new views of a scene can be generated [1.7][1.8]. Depending on the parameterization of the light field and of the slices, these views can be perspective, orthographic, crossed-slit [5.3], multi-perspective [5.4], or of some other projection type. Light field rendering is a form of image-based modeling and rendering.
  • Synthetic aperture photography. By integrating a suitable 4-D subset of a light field, one can approximate the view that would be captured by a camera with a finite aperture. Such a view has a limited depth of field. By shearing or warping the light field before this integration, one can focus on fronto-parallel [5.5] or oblique [5.6] planes in the scene. If a light field is captured with a hand-held digital camera, the focus of the recorded images can even be adjusted after the fact [3.5] (see also plenoptic camera); a sketch of this shift-and-add refocusing follows after this list.
  • 3-D displays. If a light field is presented with a technology that maps each sample to the corresponding ray in space, an autostereoscopic effect arises, similar to viewing the real scene. Non-digital technologies for this include integral photography, parallax panoramagrams and holography. Digital technologies include placing lens arrays over high-resolution displays or projecting images onto a lens array using an array of projectors. If the latter is combined with an array of video cameras, time-varying light fields can be recorded and displayed; together, the two form a 3-D television system [5.7][5.8]. Rendering and predistorting synthetic images for holographic stereograms is one of the earliest examples of computer-generated light fields and motivated the later work of Levoy and Hanrahan [5.9][2.6].
  • Glare reduction. Glare arises from multiple scattering of light and from reflections inside the lens and reduces image contrast. Glare has previously been analyzed in 2-D image space [5.10], but it is useful to treat it as a phenomenon of the 4-D ray space [5.11]. By statistically analyzing the ray space inside a camera, glare artifacts can be classified and removed. In ray space, glare behaves like high-frequency noise and can be reduced by outlier filtering. Such filtering can be performed by capturing the light field inside the camera, but this reduces the spatial resolution of the image. Uniform and non-uniform sampling of the light rays can reduce glare without significantly sacrificing image resolution [5.11].
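As an illustration of the shift-and-add idea behind synthetic aperture refocusing mentioned above (and, as a by-product, of extracting a single view as in light field rendering), here is a minimal sketch in Python. The 4-D array layout and the refocus parameter alpha are assumptions made for this example, not a description of any particular system:

    import numpy as np

    def refocus(light_field, alpha):
        """
        Shift-and-add refocusing of a 4-D light field.

        light_field: array of shape (U, V, S, T) holding grayscale sub-aperture
                     images; light_field[u, v] is the view from grid position
                     (u, v) on the camera plane (assumed layout).
        alpha:       refocus parameter; each view is shifted in proportion to
                     its offset from the central view before averaging
                     (assumed sign and scale convention).
        """
        U, V, S, T = light_field.shape
        u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
        result = np.zeros((S, T), dtype=np.float64)
        for u in range(U):
            for v in range(V):
                # A single slice light_field[u, v] is itself a novel view,
                # the basic operation of light field rendering.
                view = light_field[u, v]
                shift_s = int(round(alpha * (u - u0)))
                shift_t = int(round(alpha * (v - v0)))
                # Integer-pixel shift; real systems interpolate sub-pixel shifts.
                result += np.roll(view, (shift_s, shift_t), axis=(0, 1))
        return result / (U * V)

    # Usage on synthetic data: a 5x5 grid of 64x64 views.
    lf = np.random.rand(5, 5, 64, 64)
    image_refocused = refocus(lf, alpha=1.0)  # focus shifted to an assumed depth
    image_average = refocus(lf, alpha=0.0)    # plain average (zero shift)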

Sources

Theory

  • Wigner Distribution Function and Light Fields.

Analysis

Devices

  • C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, H. H. Chen: Programmable Aperture Photography: Multiplexed Light Field Acquisition. Proc. ACM SIGGRAPH, 2008.
  • A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, J. Tumblin: Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing. Proc. ACM SIGGRAPH, 2007.

Light field archives

  • " The Stanford Light Field Archive"
  • " UCSD / MERL Light Field Repository"
  • "The HCI Light Field Benchmark"

Applications

  • F. Pérez, J. G. Marichal, J. M. Rodriguez: The Discrete Focal Stack Transform. (PDF, 869 kB) Proc. EUSIPCO 2008.