
What is rendering?


If you have only been working with 3D for a short time, you may have wondered what exactly is meant by rendering (an important step in creating a 3D configurator, for example).

An analysis of the term from a mathematical and scientific point of view would go beyond the scope of this article. In the following, we will therefore focus on the role rendering plays in computer graphics.

The process is analogous to film development

Rendering is the most technically complex aspect of 3D production, but it is easy to understand through an analogy: just as a film photographer has to develop and print photos before they can be displayed, computer graphics artists face a similar final step.

When a computer graphics artist works on a 3D scene, the models being manipulated are actually a mathematical representation of points and surfaces (more precisely, vertices and polygons) in three-dimensional space.

The term rendering refers to the calculations performed by the render engine of a 3D software package to translate the scene from a mathematical approximation into a finished 2D image. The spatial, textural and lighting information of the entire scene is combined to determine the color value of each pixel in the flattened image.
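As a rough sketch of what "determining the color value of each pixel" means, the toy Python function below (an illustration, not any engine's actual code) shades a single pixel with a simple Lambertian (diffuse) model, combining surface orientation, light direction and surface color into one final color value:

```python
# A toy illustration of per-pixel shading: brightness depends on the
# angle between the surface normal and the direction to the light.

def lambert_shade(normal, light_dir, surface_color, light_intensity=1.0):
    """Diffuse (Lambertian) shading for one pixel.

    normal and light_dir are assumed to be unit-length 3D vectors;
    surface_color is an (R, G, B) tuple of 0-255 values.
    """
    # Dot product of two unit vectors = cosine of the angle between them.
    cos_angle = sum(n * l for n, l in zip(normal, light_dir))
    brightness = max(0.0, cos_angle) * light_intensity
    return tuple(min(255, int(c * brightness)) for c in surface_color)

# A surface facing the light head-on receives full intensity...
print(lambert_shade((0, 0, 1), (0, 0, 1), (200, 30, 30)))  # (200, 30, 30)
# ...while one lit at a grazing angle receives almost none.
print(lambert_shade((0, 0, 1), (0, 1, 0), (200, 30, 30)))  # (0, 0, 0)
```

A real render engine evaluates far richer material and lighting models, but it repeats a computation of this kind for every pixel of the flattened 2D image.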

The following image illustrates the computer-aided reproduction of a Bentley:


Two different types of rendering

There are two main types of rendering; the main difference between them is the speed at which images are calculated.

Real-time rendering is most commonly used in games and interactive graphics where images need to be computed from 3D information at incredibly high speeds.

  • Interactivity: Since it is impossible to predict exactly how a player will interact with the game environment, images must be rendered in "real time" as the action unfolds.
  • Speed: For motion to appear fluid, at least 18-20 frames per second must be displayed on the screen; lower frame rates look choppy.
  • Methods: Real-time rendering is drastically accelerated by dedicated graphics hardware (GPUs) and by precompiling as much information as possible. Much of the lighting information in a game environment is pre-calculated and baked directly into the environment's texture files to increase rendering speed.
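These frame-rate requirements translate into a hard time budget per frame, which everything in a game (rendering included) must fit into. A quick back-of-the-envelope calculation:

```python
# Converting a target frame rate into the time budget available per frame.

def frame_budget_ms(fps):
    """Milliseconds available per frame at a given frame rate."""
    return 1000.0 / fps

for fps in (20, 60):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
# 20 fps -> 50.0 ms per frame
# 60 fps -> 16.7 ms per frame
```

At 60 frames per second, the renderer has under 17 milliseconds to produce each complete image, which is why so much lighting work is precomputed.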

Offline rendering (pre-rendering): Offline rendering is used in situations where speed is less of an issue; calculations are typically performed on multi-core CPUs rather than dedicated graphics hardware.

  • Predictability: Offline rendering is most commonly used in animation and effects work, where visual complexity and photorealism are held to a much higher standard. Since there is no unpredictability about what will appear in each frame, large studios have been known to spend up to 90 hours of rendering time on a single frame.
  • Photorealism: Because offline rendering takes place within an open-ended time frame, it can produce more realistic images than real-time rendering. Characters, environments and their associated textures and lights can use higher polygon counts and texture files at 4K resolution or higher.

Three different rendering techniques

As a rule, three different rendering techniques are used in practice; they are presented below. Each has its own advantages and disadvantages, so each of the three is the right choice in certain situations.

Scanline rendering: A good choice if the renderings need to be created as quickly as possible. Instead of rendering an image pixel by pixel, scanline renderers calculate on a polygon-by-polygon basis. Scanline techniques combined with precalculated (baked) lighting can achieve speeds of 60 frames per second or better on a high-end graphics card.
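The core idea of working "on a polygon basis" can be sketched in a few lines: instead of testing every pixel against every polygon, the renderer walks the image row by row (scanline by scanline) and fills only the horizontal span each polygon covers on that row. The Python below is a deliberately minimal illustration, not production rasterizer code:

```python
# Minimal scanline rasterization of one triangle into a 0/1 grid.

def fill_triangle(width, height, tri):
    """tri is a list of three (x, y) vertices."""
    grid = [[0] * width for _ in range(height)]
    for y in range(height):
        xs = []
        # Intersect the scanline y with each of the triangle's three edges.
        for (x0, y0), (x1, y1) in zip(tri, tri[1:] + tri[:1]):
            if (y0 <= y < y1) or (y1 <= y < y0):
                t = (y - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        if len(xs) == 2:
            # Fill the horizontal span between the two edge crossings.
            for x in range(round(min(xs)), round(max(xs))):
                grid[y][x] = 1
    return grid

grid = fill_triangle(8, 8, [(1, 1), (6, 1), (3, 6)])
for row in grid:
    print("".join("#" if v else "." for v in row))
```

Real scanline renderers add depth sorting, interpolated shading and texture lookups along each span, but the row-by-row structure is the same, and it is what makes the approach so fast.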

Raytracing: In raytracing, one or more rays of light are traced from the camera to the nearest 3D object for each pixel of the image. Each ray is then followed through a fixed number of bounces, which can include reflection or refraction depending on the materials in the 3D scene. The color of each pixel is calculated algorithmically from the interactions of the ray with the objects along its traced path. Raytracing can produce far more photorealistic results than scanline rendering, but is much slower.
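The first step of that process, shooting one ray per pixel from the camera and testing what it hits, can be shown in miniature. The sketch below (an illustration under simplifying assumptions: one sphere, no bounces, no shading) traces a ray through each pixel of a tiny image and marks hits with `#`:

```python
import math

# Toy raytracer: one camera ray per pixel, tested against a single sphere.

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest intersection, or None.

    direction is assumed to be unit length, so the quadratic's a == 1.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render(width, height):
    sphere_center, sphere_radius = (0.0, 0.0, 3.0), 1.0
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Map pixel (i, j) to a ray direction through the image plane.
            x = (i + 0.5) / width * 2 - 1
            y = 1 - (j + 0.5) / height * 2
            length = math.sqrt(x * x + y * y + 1)
            d = (x / length, y / length, 1 / length)
            hit = ray_sphere_hit((0, 0, 0), d, sphere_center, sphere_radius)
            row += "#" if hit else "."
        rows.append(row)
    return rows

for line in render(16, 8):
    print(line)
```

A full raytracer repeats this intersection test against every object, then spawns further rays at each hit point for reflection, refraction and shadows, which is exactly where the heavy computational cost comes from.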

Radiosity: In contrast to raytracing, radiosity is calculated independently of the camera and is surface-oriented rather than pixel-oriented. Its primary function is to simulate surface color more accurately by taking indirect illumination (bounced diffuse light) into account. Radiosity is typically characterized by soft, graded shadows and color bleeding, where light from colored objects "bleeds" onto nearby surfaces.
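The camera-independent, surface-oriented nature of radiosity can be seen in its core equation: each patch's brightness B equals its own emission E plus the light it gathers from every other patch, weighted by form factors F and its reflectivity. The sketch below (with made-up numbers for two facing patches, purely for illustration) solves this by simple iteration; no camera appears anywhere:

```python
# Iterative radiosity: B_i = E_i + rho_i * sum_j F_ij * B_j

def solve_radiosity(emission, reflectivity, form_factors, iterations=50):
    """Repeatedly redistribute light between patches until it settles."""
    b = list(emission)
    for _ in range(iterations):
        b = [
            emission[i] + reflectivity[i]
            * sum(form_factors[i][j] * b[j] for j in range(len(b)))
            for i in range(len(b))
        ]
    return b

# Two facing patches: patch 0 is a light source, patch 1 only reflects.
emission = [1.0, 0.0]
reflectivity = [0.5, 0.8]
form_factors = [[0.0, 0.2],   # fraction of patch 0's view filled by patch 1
                [0.2, 0.0]]
print(solve_radiosity(emission, reflectivity, form_factors))
```

Note that patch 1 ends up brighter than its zero emission, lit purely by bounced light, and patch 0 in turn receives a little of that light back: this mutual exchange is what produces radiosity's soft shadows and color bleeding.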

In practice, radiosity and raytracing are often used in combination. On this basis, impressive and photorealistic renderings can be created.

Credit: viscircle.de
