Anti-aliasing / Filtering in Ray Tracing



In ray tracing / path tracing, one of the simplest ways to anti-alias the image is to supersample the pixel values and average the results. That is, instead of shooting every sample through the center of the pixel, you offset each sample by some amount.

In searching around the internet, I've found two somewhat different methods to do this:

  1. Generate samples however you want and weigh the result with a filter
    • One example is PBRT
  2. Generate the samples with a distribution equal to the shape of a filter

Generate and Weigh

The basic process is:

  1. Create samples however you want (randomly, stratified, low-discrepancy sequences, etc.)
  2. Offset the camera ray using two samples (x and y)
  3. Render the scene with the ray
  4. Calculate a weight by evaluating a filter function at the sample's distance from the pixel center (for example, a Box, Tent, or Gaussian filter)
  5. Apply the weight to the color from the render
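
The steps above can be sketched as follows. This is a minimal illustration, not PBRT's implementation: `radiance` is a hypothetical callback standing in for tracing a camera ray through the sub-pixel offset, and a tent filter is used as the example filter.

```python
import random

def tent_weight(dx, dy, radius=1.0):
    """Tent (triangle) filter weight for an offset from the pixel center."""
    fx = max(0.0, 1.0 - abs(dx) / radius)
    fy = max(0.0, 1.0 - abs(dy) / radius)
    return fx * fy

def render_pixel(radiance, samples_per_pixel=16, radius=1.0):
    """Accumulate filter-weighted samples, then resolve by the total weight.

    `radiance(dx, dy)` is a stand-in for rendering the scene with a camera
    ray offset by (dx, dy) from the pixel center.
    """
    color_sum = 0.0
    weight_sum = 0.0
    for _ in range(samples_per_pixel):
        # Step 1-2: generate sample offsets (here: uniform random) inside
        # the filter's support and offset the camera ray by them.
        dx = random.uniform(-radius, radius)
        dy = random.uniform(-radius, radius)
        # Step 3-5: render, weigh by the filter, accumulate.
        w = tent_weight(dx, dy, radius)
        color_sum += w * radiance(dx, dy)
        weight_sum += w
    # The final resolve: divide accumulated color by accumulated weight.
    return color_sum / weight_sum if weight_sum > 0.0 else 0.0
```

Note the final division: this is the "track the weights and do a final resolve" bookkeeping discussed below, shown here per-pixel for simplicity (a real renderer keeps both sums in the image buffer, and with splatting a sample contributes to several neighboring pixels).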

Generate in the shape of a filter

The basic premise is to use Inverse Transform Sampling to create samples that are distributed according to the shape of a filter. For example, a histogram of samples distributed in the shape of a Gaussian would be:
(image: histogram of Gaussian-distributed samples)

This can be done either exactly or by binning the function into a discrete pdf/cdf. smallpt uses the exact inverse CDF of a tent filter. Examples of the binning method can be found here
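
For the exact case, the tent filter has a closed-form inverse CDF. The sketch below is a Python transliteration of the same trick smallpt uses (the function name `sample_tent` is mine): for the tent pdf f(x) = 1 − |x| on [−1, 1], inverting the CDF on each half gives the two square-root branches.

```python
import math
import random

def sample_tent(radius=1.0):
    """Sample an offset in [-radius, radius] distributed as a tent filter,
    via the exact inverse CDF (the same mapping smallpt uses)."""
    u = 2.0 * random.random()           # u uniform in [0, 2)
    if u < 1.0:
        x = math.sqrt(u) - 1.0          # left half:  maps [0, 1) -> [-1, 0)
    else:
        x = 1.0 - math.sqrt(2.0 - u)    # right half: maps [1, 2) -> [0, 1)
    return x * radius
```

Samples drawn this way are simply averaged per pixel; no per-sample weight is needed, which is the appeal of this method.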


What are the pros and cons of each method? And why would you use one over the other? I can think of a few things:

Generate and Weigh seems to be the most robust, allowing any combination of any sampling method with any filter. However, it requires you to track the weights in the ImageBuffer and then do a final resolve.

Generate in the Shape of a Filter can only support positive filter shapes (i.e., no Mitchell, Catmull-Rom, or Lanczos filters), since you cannot have a negative pdf. But, as mentioned above, it's easier to implement, since you don't need to track any weights.

Though, in the end, I guess you can think of method 2 as a simplification of method 1, since it's essentially using an implicit Box Filter weight.


Posted 2016-03-02T15:28:59.727

Reputation: 2 227

Just thinking aloud... Could you model the negative part of a filter separately to generate two sets of samples, one to be treated as positive and the other as negative? Would this allow arbitrary filters for your second approach (generate in the shape of a filter)? – trichoplax – 2016-03-02T20:02:08.893

Maybe? Lemme fiddle with it for a bit – RichieSams – 2016-03-02T20:47:44.223


Ok, if you track the zeros of the function, you can abs() the output into the pdf. Then when sampling, you can check if you're negative. Sample code here:

– RichieSams – 2016-03-02T22:31:26.867
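
A hypothetical reconstruction of the idea from the comments above (not the linked sample code): bin the absolute value of the filter into a discrete CDF, remember the sign of the filter in each bin, and return a ±1 weight alongside each sample.

```python
import bisect
import random

def build_signed_sampler(filter_fn, lo, hi, bins=256):
    """Bin |filter_fn| into a discrete CDF; each bin remembers the sign of
    filter_fn there. Returns a sampler yielding (offset, +1.0 or -1.0)."""
    step = (hi - lo) / bins
    centers = [lo + (i + 0.5) * step for i in range(bins)]
    values = [filter_fn(c) for c in centers]
    cdf, total = [], 0.0
    for v in values:
        total += abs(v)                  # pdf is abs() of the filter
        cdf.append(total)

    def sample():
        u = random.random() * total
        i = bisect.bisect_left(cdf, u)   # pick a bin by the discrete CDF
        x = centers[i] + (random.random() - 0.5) * step  # jitter in bin
        sign = 1.0 if values[i] >= 0.0 else -1.0         # negative lobe?
        return x, sign

    return sample
```

Each rendered sample would then be multiplied by its sign before averaging. As noted in the answer below, the Filter Importance Sampling paper found this signed variant actually increases variance in practice.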



There is a great paper from 2006 on this topic, Filter Importance Sampling. They propose your method 2, study the properties, and come out generally in favor of it. They claim that this method gives smoother rendering results because it weights all samples that contribute to a pixel equally, thereby reducing variance in the final pixel values. This makes some sense, as it's a general maxim in Monte Carlo rendering that importance-sampling will give lower variance than weighted samples.

Method 2 also has the advantage of being slightly easier to parallelize because each pixel's computations are independent of all other pixels, while in method 1, sample results are shared across neighboring pixels (and therefore have to be synchronized/communicated somehow when pixels are parallelized across multiple processors). For the same reason, it's easier to do adaptive sampling (more samples in high-variance areas of the image) with method 2 than method 1.

In the paper, they also experimented with a Mitchell filter, sampling from abs() of the filter and then weighting each sample with either +1 or −1, like @trichoplax suggested. But this ended up actually increasing the variance and being worse than method 1, so they conclude that method 2 is only usable for positive filters.

That being said, the results from this paper may not be universally applicable, and which sampling method is better may be somewhat scene-dependent. I wrote a blog post investigating this question independently in 2014, using a synthetic "image function" rather than full rendering, and found method 1 to give more visually pleasing results because it smooths high-contrast edges more nicely. Benedikt Bitterli also commented on that post reporting a similar issue with his renderer (excess high-frequency noise around light sources when using method 2). Beyond that, I found the main difference between the methods was the frequency of the resulting noise: method 2 gives higher-frequency, "pixel-sized" noise, while method 1 gives noise "grains" that are 2-3 pixels across. The amplitude of the noise was similar for both, though, so which kind of noise looks less bad is probably a matter of personal preference.

Nathan Reed

Posted 2016-03-02T15:28:59.727

Reputation: 15 036

Thanks! These are great resources. So, in the end, there are 3 methods? 1. Generate and Weigh with splatting 2. Generate and Weigh without splatting 3. Generate in the Shape of a Filter – RichieSams – 2016-03-03T15:54:38.817

Do you know of any papers, blogs, etc. that explore how to parallelize Generate and Weigh with splatting? Off the top of my head, you could have a mutex per tile, or make each pixel atomic. – RichieSams – 2016-03-03T15:57:59.273

@RichieSams I don't know why you'd use "generate and weigh without splatting", actually; that seems like it would be worse in any case than filter importance sampling. I was assuming that "generate and weigh" implies splatting. As for parallelization of splatting, off the top of my head, one way would be to split the image into tiles, but give each tile a 2-3 pixel border to catch splats that cross the tile edge. Then in a final pass, additively composite the bordered tiles together into the final image. – Nathan Reed – 2016-03-03T18:20:25.620
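
The bordered-tile compositing from the last comment might be sketched like this (my own sketch under that comment's assumptions; `render_tile` is a hypothetical per-tile renderer whose splats may land up to `border` pixels outside its core region):

```python
def composite_tiles(width, height, tile_size, border, render_tile):
    """Additively composite bordered tiles into one image buffer.

    `render_tile(x0, y0, x1, y1)` generates samples only for pixels in the
    core region [x0, x1) x [y0, y1), but returns a dict mapping
    (x, y) -> weighted color that may include splats landing outside it.
    Each tile can be rendered on a separate thread with no synchronization;
    only this final additive pass touches the shared image.
    """
    image = [[0.0] * width for _ in range(height)]
    for ty in range(0, height, tile_size):
        for tx in range(0, width, tile_size):
            splats = render_tile(tx, ty,
                                 min(width, tx + tile_size),
                                 min(height, ty + tile_size))
            # Final pass: accept splats inside the tile's bordered region
            # and add them into the full image.
            for (x, y), c in splats.items():
                if (tx - border <= x < tx + tile_size + border and
                        ty - border <= y < ty + tile_size + border and
                        0 <= x < width and 0 <= y < height):
                    image[y][x] += c
    return image
```

Because splatting is additive, pixels near tile edges simply receive contributions from two (or four) bordered tiles, and the sums come out the same as a single-threaded render.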