Summary: Optical systems used in photography and cinema produce depth-of-field effects, that is, variations of focus with depth. These effects are simulated in image synthesis by integrating the incoming radiance at each pixel over the lens aperture. Unfortunately, aperture integration is extremely costly in defocused areas, where the incoming radiance has high variance and many samples are required for a noise-free Monte Carlo estimate. Conversely, using many aperture samples is wasteful in focused areas, where the integrand varies little. Similarly, image sampling in defocused areas should be adapted to the very smooth appearance variations due to blurring. This article introduces an analysis of focusing and depth of field in the frequency domain, allowing a practical characterization of a light field's frequency content for both image and aperture sampling. Based on this analysis, we propose an adaptive depth-of-field rendering algorithm that optimizes sampling in two important ways. First, image sampling is driven by conservative bandwidth prediction, and a splatting reconstruction technique ensures correct image reconstruction. Second, at each pixel the variance of the radiance over the aperture is estimated and used to govern sampling. This technique is easily integrated into any sampling-based renderer and vastly improves performance.
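As a rough illustration of the variance-driven aperture sampling summarized above, the following C++ sketch estimates per-pixel radiance with a small pilot pass over the lens aperture and allocates extra samples where the estimated variance is high. The functions traceRadiance and sampleAperture, the toy integrand, and the sample budgets are hypothetical stand-ins for this sketch, not the paper's implementation.

```cpp
// Minimal sketch: adaptive aperture sampling for one pixel, driven by the
// variance of the radiance over the lens aperture (hypothetical example).
#include <cmath>
#include <cstdio>
#include <random>

static const double kPi = 3.14159265358979323846;

// Placeholder for the renderer's ray evaluation: radiance along the ray from
// the lens point (lensU, lensV) through pixel (px, py). The toy integrand has
// high variance away from the image center, mimicking defocused regions.
static double traceRadiance(double lensU, double lensV, double px, double py) {
    double defocus = std::hypot(px - 0.5, py - 0.5);
    return 1.0 + defocus * std::sin(40.0 * (lensU + lensV));
}

// Uniformly sample a point on a disk-shaped aperture of the given radius.
static void sampleAperture(std::mt19937& rng, double radius, double& u, double& v) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double r = radius * std::sqrt(uni(rng));
    double theta = 2.0 * kPi * uni(rng);
    u = r * std::cos(theta);
    v = r * std::sin(theta);
}

// Pilot pass to estimate the aperture variance, then extra samples allocated
// in proportion to that variance (clamped to a budget).
static double renderPixel(std::mt19937& rng, double px, double py) {
    const int pilot = 8, maxExtra = 120;
    const double apertureRadius = 1.0;
    double sum = 0.0, sumSq = 0.0;
    int n = 0;
    auto addSample = [&]() {
        double u, v;
        sampleAperture(rng, apertureRadius, u, v);
        double L = traceRadiance(u, v, px, py);
        sum += L; sumSq += L * L; ++n;
    };
    for (int i = 0; i < pilot; ++i) addSample();
    double mean = sum / n;
    double var = std::max(0.0, sumSq / n - mean * mean);
    int extra = std::min(maxExtra, static_cast<int>(var * 1000.0));  // heuristic budget
    for (int i = 0; i < extra; ++i) addSample();
    return sum / n;
}

int main() {
    std::mt19937 rng(7);
    // A "focused" pixel (center) gets few samples; a "defocused" one (corner)
    // triggers extra samples under the toy integrand.
    std::printf("center: %f\n", renderPixel(rng, 0.5, 0.5));
    std::printf("corner: %f\n", renderPixel(rng, 0.9, 0.9));
    return 0;
}
```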