Rendering Farm

Parallel rendering (or distributed rendering) is a method used to improve the performance of computer graphics software by spreading the rendering workload across multiple processors or machines.

Rendering graphics requires massive computational resources for complex workloads such as medical visualization, iso-surface generation, and some CAD applications. Traditional methods such as ray tracing and 3D texture rendering are extremely slow on a single machine. Furthermore, virtual reality and visual simulation programs, which render to multiple display systems concurrently, are natural applications for parallel rendering.

Subdivision of work

Parallel rendering divides the work to be done and processes it in parallel. For example, a non-parallel ray-casting application sends rays one by one to all the pixels in the view frustum. Instead, we can divide the frustum into some number x of tiles and run x threads or processes, each casting rays into its own tile in parallel. A cluster of machines can do this work, after which the partial results are composited into the final image. This is parallel rendering.
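The tile subdivision above can be sketched in a few lines. This is a minimal illustration, not a real renderer: `shade` is a hypothetical stand-in for a per-pixel ray-casting routine, and the image is split into horizontal strips rendered by a pool of workers.

```python
# Minimal sketch of tile-based parallel rendering: the view is split into
# horizontal strips, each rendered by its own worker, and the strips are
# composited side by side into the final image.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILES = 64, 64, 4  # toy image divided into 4 tile strips

def shade(x, y):
    """Placeholder per-pixel work; a real renderer would cast a ray here."""
    return (x * y) % 256

def render_tile(tile_index):
    """Render one horizontal strip of the image."""
    rows = HEIGHT // TILES
    y0 = tile_index * rows
    return [[shade(x, y) for x in range(WIDTH)]
            for y in range(y0, y0 + rows)]

def render_parallel():
    with ThreadPoolExecutor(max_workers=TILES) as pool:
        strips = list(pool.map(render_tile, range(TILES)))
    # Compositing a pixel decomposition is trivial: concatenate the strips.
    return [row for strip in strips for row in strip]
```

In a real render farm the workers would be separate machines and the compositing step would gather tiles over the network, but the division of the frustum follows the same pattern.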


Non-interactive parallel rendering

Traditional parallel rendering is a classic example of an embarrassingly parallel workload: the frames to be rendered are distributed among the available compute nodes, with each node rendering complete frames independently, so multiple frames are processed concurrently. A more tightly coupled parallel process can instead distribute a single frame across multiple nodes, using cross-node communication to render each frame orders of magnitude faster. In this way, a full rendering job consisting of multiple frames can be previewed in near real time, enabling designers to do better work faster.
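The embarrassingly parallel case reduces to a scheduling problem: map each frame of the animation to a node, with no communication between nodes. A round-robin assignment, sketched below with illustrative frame and node counts, is the simplest such schedule.

```python
# Sketch of embarrassingly parallel frame distribution: frames are assigned
# round-robin to compute nodes, and each node renders its frames with no
# cross-node communication.
def assign_frames(num_frames, num_nodes):
    """Return a mapping from node index to the list of frames it renders."""
    schedule = {node: [] for node in range(num_nodes)}
    for frame in range(num_frames):
        schedule[frame % num_nodes].append(frame)
    return schedule
```

Real farm managers refine this with dynamic queues, since frames rarely take equal time, but the independence of the work units is what makes the problem embarrassingly parallel.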

Interactive parallel rendering

In interactive parallel rendering, there are several approaches to distributing the rendering work, each with its own advantages and disadvantages.

Pixel Decompositions

Pixel decompositions divide the pixels of the final view evenly among the rendering resources, either at the level of full pixels or of sub-pixels. The first approach 'squeezes' the view frustum so that each resource renders a subset of the screen, while the second renders the same scene with slightly modified camera positions, for example for full-screen anti-aliasing or depth-of-field effects. Full-pixel decompositions composite the rendered pixels side by side, while sub-pixel decompositions blend all sub-pixel samples to compute each final pixel.

In contrast to sort-first decomposition, no sorting of rendered primitives takes place, since all rendering resources render more or less the same view. Pixel decompositions are inherently load-balanced and are ideal for purely fill-limited applications such as ray tracing and 3D volume rendering.
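The blending step of a sub-pixel decomposition can be sketched as follows. Here `sample` is a hypothetical stand-in for a full render pass at a jittered camera offset; the final pixel is simply the average of the samples produced by the different rendering units.

```python
# Sketch of sub-pixel decomposition: each rendering unit renders the same
# pixel with a slightly jittered camera position, and the final colour is
# the blend (average) of all sub-pixel samples.
import random

def sample(x, y, jitter_x, jitter_y):
    """Placeholder render pass at a jittered sub-pixel position."""
    return (x + jitter_x) + (y + jitter_y)  # toy 'colour' value

def blend_subpixels(x, y, num_units, seed=0):
    """Average the jittered samples from num_units rendering units."""
    rng = random.Random(seed)
    samples = [sample(x, y, rng.random() - 0.5, rng.random() - 0.5)
               for _ in range(num_units)]
    return sum(samples) / len(samples)
```

With jitter offsets drawn from [-0.5, 0.5), the averaged result approximates the pixel's true coverage, which is exactly the mechanism behind full-screen anti-aliasing.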


Other Decomposition Modes

DPlex rendering distributes full, alternating frames to the individual rendering nodes. It scales very well, but it increases the latency between user input and the final display, which is often irritating for the user. Stereo decomposition is used for immersive applications, where the passes for the individual eyes are rendered by different rendering units. Passive stereo systems are a typical example of this mode.
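The latency cost of DPlex can be estimated with a rough back-of-the-envelope calculation, assuming (as a simplification) that an N-way DPlex pipeline adds about N-1 extra frame intervals between input and display, since the frame shown now was started N intervals earlier.

```python
# Rough sketch of the latency cost of DPlex decomposition: with N nodes
# rendering alternating frames, the pipeline adds roughly N-1 extra frame
# intervals of input-to-display latency. Numbers are illustrative.
def dplex_latency_ms(num_nodes, frame_time_ms):
    """Approximate extra latency introduced by an N-way DPlex pipeline."""
    return (num_nodes - 1) * frame_time_ms
```

At 60 Hz (about 16.7 ms per frame), a 4-way DPlex setup would add roughly 50 ms of latency, which explains why this mode, despite its excellent scaling, can feel sluggish in interactive use.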

Parallel rendering can be used in graphics-intensive applications to visualize data more efficiently, simply by adding resources such as more machines.