Path Tracing vs Ray Tracing
Path tracing is all the rage in the offline rendering space these days. From Cycles to SuperFly (based on the open source Cycles) to Octane, most new rendering engines seem to be using this technology. It is sometimes referred to as "unbiased, physically correct rendering", but what is path tracing, how does it differ from ray tracing, and is it the future of high quality offline rendering? I will be looking to answer all of those questions in this blog post for anyone confused by the changing landscape of rendering engines (note that I will be talking about offline rendering here, as opposed to realtime rendering).
So first up, the question: what is path tracing? Unfortunately the name is not terribly descriptive, and when I first heard it I assumed it was simply another name for ray tracing. In fact, perhaps the easiest way to explain path tracing is to compare it to the more familiar ray tracing. In ray tracing, a ray is sent from the virtual camera into the scene and traced until it intersects a solid body. At that point a ray is cast towards each light source in the scene to calculate illumination, and surface shading is calculated for the intersection point. If the surface is transparent, the ray is sent onwards into the scene, possibly bent at an angle to simulate refraction. If the surface is reflective, a new ray is sent out at the mirror angle and traced in turn.

Now, I often see ray tracing touted in online discussions (usually about realtime rendering for games) as a magic fix for rendering, as if it somehow provided physically accurate results. Well, it doesn't. It comes closer than triangle rasterization (the technology employed in almost all games, and what graphics cards are optimized for), but it is no simulation of reality. It gives us reflections and refractions virtually for free, and it gives very nice hard shadows (unfortunately, in the real world shadows are rarely if ever perfectly sharp). So just as rasterization engines have to cheat to achieve reflections and refractions (pay close attention to reflective surfaces in games: they reflect only a static scene, or are very blurry, or reflect only objects that are on screen), a ray tracer has to cheat to get soft shadows, caustics and global illumination, to name a few of the effects required to achieve photorealism.
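To make that loop concrete, here is a deliberately tiny sketch of a classic ray tracer in Python. Everything in it (the sphere-only scene, the helper names, the single mirror bounce, the simplified shadow test) is invented for illustration and is not taken from any real engine:

```python
import math

# Minimal 3-vector helpers, using plain tuples.
def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def scale(a, s): return (a[0] * s, a[1] * s, a[2] * s)
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def hit_sphere(origin, direction, center, radius):
    """Distance to the nearest hit, or None. `direction` must be unit length."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None   # small epsilon avoids self-intersection

# Hypothetical scene: spheres as (center, radius, albedo, reflectivity),
# and traditional point lights as (position, intensity).
SPHERES = [((0.0, 0.0, -3.0), 1.0, (0.8, 0.2, 0.2), 0.3),
           ((0.0, -101.0, -3.0), 100.0, (0.6, 0.6, 0.6), 0.0)]
LIGHTS = [((5.0, 5.0, 0.0), (1.0, 1.0, 1.0))]

def trace(origin, direction, depth=0):
    # Find the closest surface this ray intersects.
    nearest = None
    for center, radius, albedo, refl in SPHERES:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, albedo, refl)
    if nearest is None:
        return (0.05, 0.05, 0.1)   # background colour
    t, center, albedo, refl = nearest
    point = add(origin, scale(direction, t))
    normal = norm(sub(point, center))
    # Cast a shadow ray towards each light; skip its contribution if blocked.
    # (Kept simple: no distance falloff, and occluders beyond the light count too.)
    colour = (0.0, 0.0, 0.0)
    for light_pos, intensity in LIGHTS:
        to_light = norm(sub(light_pos, point))
        if not any(hit_sphere(point, to_light, c, r) for c, r, _, _ in SPHERES):
            lambert = max(0.0, dot(normal, to_light))
            colour = add(colour, scale((albedo[0] * intensity[0],
                                        albedo[1] * intensity[1],
                                        albedo[2] * intensity[2]), lambert))
    # Perfectly sharp mirror reflection: recurse with the reflected ray.
    if refl > 0.0 and depth < 3:
        bounce = sub(direction, scale(normal, 2.0 * dot(direction, normal)))
        colour = add(scale(colour, 1.0 - refl),
                     scale(trace(point, norm(bounce), depth + 1), refl))
    return colour
```

You would render an image by calling trace once per pixel with a camera ray, e.g. trace((0, 0, 0), norm((x, y, -1))). Note how the lights are queried directly at every hit, which is what produces those perfectly hard shadows; that direct query is exactly what a path tracer gives up.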
Now, a path tracer is like a ray tracer on steroids. Instead of sending out one ray per pixel it sends out tens, hundreds or even thousands. When a ray hits a surface it doesn't trace a path to every light source; instead it bounces the ray off the surface and keeps bouncing until it hits a light source or exhausts some bounce limit. It then calculates the amount of light transferred all the way back to the pixel, including any colour information gathered from surfaces along the way, and averages the values from all the paths traced into the scene to get the final pixel colour. If that sounds like a rather brute force approach to you, then you are right. It requires a ton of computing power, and if you don't send out enough rays per pixel, or don't trace the paths far enough into the scene, you end up with a very spotty image, as many pixels fail to find any light source from their rays. It also requires light sources to have actual sizes, a bit of a departure from traditional point lights, which occupy an infinitely small point in space (that works fine for ray tracing and rasterization, which only care about where the light is, but a path tracer needs geometry its rays can actually intersect).

In return, path tracing gives us all of the things that ray tracing doesn't give us out of the box: soft shadows, caustics and global illumination. You should still not confuse it with a true simulation of the real world, however, since it doesn't fully simulate complex surfaces like skin, instead relying on shader tricks like subsurface scattering to fake them. There is also a practical limit to how many paths you can trace from each pixel and how far you can follow them before giving up. To simulate photons in the real world you would have to cast billions of paths and trace them almost indefinitely (well, at least until they leave the area you are rendering), and you would have to do it in an environment modelled down to an atomic scale. That's not practical, so I think we will always be stuck with an approximation. After all, we just need to create images that look real to humans, which is a much lower threshold than "simulate reality completely".
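Again as a sketch rather than any production algorithm, the same toy scene can be path traced by replacing the shadow rays with random bounces. This snippet reuses the vector helpers, hit_sphere and SPHERES from the previous one; the emissive sphere, bounce limit and sample count are arbitrary choices for illustration:

```python
import random

# Lights are now real geometry the paths can hit: (center, radius, emitted colour).
EMISSIVE = [((3.0, 4.0, -2.0), 1.0, (8.0, 8.0, 8.0))]

def random_hemisphere_dir(normal):
    """Uniform random direction in the hemisphere around `normal` (rejection sampling)."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        if 1e-6 < dot(d, d) <= 1.0:
            d = norm(d)
            return d if dot(d, normal) > 0.0 else scale(d, -1.0)

def trace_path(origin, direction, depth=0):
    if depth > 4:                          # bounce limit exhausted: give up
        return (0.0, 0.0, 0.0)
    # Find the closest hit among ordinary surfaces and light sources alike.
    nearest = None
    for center, radius, emit in EMISSIVE:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, None, emit)
    for center, radius, albedo, _ in SPHERES:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, albedo, None)
    if nearest is None:
        return (0.0, 0.0, 0.0)             # path escaped without finding a light
    t, center, albedo, emit = nearest
    if emit is not None:
        return emit                        # the path found a light source: done
    point = add(origin, scale(direction, t))
    normal = norm(sub(point, center))
    # No shadow rays: just bounce in a random direction and keep going.
    incoming = trace_path(point, random_hemisphere_dir(normal), depth + 1)
    # Tint whatever light the rest of the path gathered by this surface's colour.
    return (albedo[0] * incoming[0], albedo[1] * incoming[1], albedo[2] * incoming[2])

def render_pixel(origin, direction, samples=256):
    """Average many noisy path samples; too few and the spotty grain remains."""
    total = (0.0, 0.0, 0.0)
    for _ in range(samples):
        total = add(total, trace_path(origin, direction))
    return scale(total, 1.0 / samples)
```

The two differences described above are visible directly in the code: lights are geometry that rays can hit, and the pixel colour is an average over many noisy samples. (A real path tracer also weights bounces by the surface's BRDF instead of bouncing uniformly; this sketch omits that for brevity.)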
So is path tracing the future of high quality rendering? I do think it is. As computers (and in particular GPUs) continue to scale up in speed, path tracing becomes more and more practical, and it requires less cheating than ray tracing (and far less than rasterization). This means less work for artists to achieve stunning photoreal results, and that's always a good thing (anyone who has worked with a high end ray tracer like MentalRay can appreciate just how tricky it can be to tune the myriad options to achieve the result you want). That said, at present I would not recommend using a path tracer in all circumstances. In the end it is all about choosing the right tool for the job. For single images where you want as much quality as possible and don't mind the render potentially taking hours, path tracing is great. However, if you need to render a large number of images (for a comic or an animation), path tracing may not be the right choice, especially if you are a solo artist without a render farm. In that case a well-tweaked ray tracer can give you almost as nice a result in a fraction of the render time.
The crux of the problem is that a path tracer locks you into an all-or-nothing approach. Turn the quality down too far and you get a grainy image, which you can use as a preview but which is wholly unsuitable for production. So to get a usable image you either tweak the quality settings until you are just past the point where most of the grain is gone, or use progressive refinement and let the render run until it looks good (a great feature, by the way; see the sketch below). In contrast, with a ray tracer you can generally turn off most of the expensive features (global illumination mostly) and still get a very high quality result rendered very quickly. And losing GI is often not a big deal: most competent artists can quickly fake most of what it provides by tweaking ambient lighting and popping in a few extra weak lights in places that don't actually have any light sources (to fake light bouncing off a red wall, for example, you might place a weak red area light onto the wall, which is good enough to fool just about anyone looking at the resulting image). Of course faking always costs extra time and skill on the artist's side, so eventually it won't be needed; but until path tracing times are measured in minutes per frame, as opposed to the hours or days they are now, ray tracing (or rasterization, especially micropolygon rasterizers like the one powering RenderMan) remains the better option for many classes of rendering tasks.
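As an aside, the progressive refinement mentioned above falls out of the averaging almost for free: keep a running sum per pixel and you can display the current average after every pass. A sketch, building on the hypothetical trace_path from earlier (the pass count and stopping rule are placeholders):

```python
def progressive_render_pixel(origin, direction, passes=1024):
    """Running average: the preview starts grainy and converges as passes accumulate."""
    accum = (0.0, 0.0, 0.0)
    for n in range(1, passes + 1):
        accum = add(accum, trace_path(origin, direction))
        preview = scale(accum, 1.0 / n)   # displayable after every pass
        # ...show `preview`, and stop early once it looks clean enough
    return scale(accum, 1.0 / passes)
```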