Image from Wikimedia Commons.
Adding dispersion to Photorealizer isn't my top priority, but I have given it some thought. Here are some of my ideas for implementing dispersion in Photorealizer (or any other distribution ray tracer):
Propagate light through the scene using a spectrum sampled at a number of wavelengths, rather than just RGB triplets. For speed, trace all of the wavelengths together, as you might for RGB color. Then when you hit a refractive surface, importance sample a single wavelength from the distribution of wavelengths using inverse transform sampling, i.e., inverting the discrete CDF of the distribution. I have 1D and 2D discrete probability distributions in Photorealizer, which have come in handy for importance sampling HDR environment maps, lights, and BSDFs. Given the sampled wavelength and the material properties, use the Sellmeier equation to compute the index of refraction of the material at that wavelength, then compute the refracted direction with Snell's law and shoot a ray in that direction. Use standard Monte Carlo integration and Russian roulette to give the new sample the proper weight.
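Here's a rough C++ sketch of those two pieces. The Sellmeier coefficients are the commonly published values for BK7 glass, and the DiscreteDistribution type is just a stand-in for the distribution classes mentioned above, not Photorealizer's actual code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sellmeier equation: n^2(L) = 1 + sum_i B_i * L^2 / (L^2 - C_i),
// with the wavelength L in micrometers. The coefficients below are
// the commonly published values for BK7 glass.
double sellmeierIOR(double lambdaMicrometers) {
    static const double B[3] = { 1.03961212, 0.231792344, 1.01046945 };
    static const double C[3] = { 0.00600069867, 0.0200179144, 103.560653 };
    const double l2 = lambdaMicrometers * lambdaMicrometers;
    double n2 = 1.0;
    for (int i = 0; i < 3; ++i)
        n2 += B[i] * l2 / (l2 - C[i]);
    return std::sqrt(n2);
}

// A bare-bones discrete 1D distribution supporting inverse transform
// sampling.
struct DiscreteDistribution {
    std::vector<double> cdf;  // running sums of the weights
    explicit DiscreteDistribution(const std::vector<double>& weights) {
        double sum = 0.0;
        for (double w : weights) cdf.push_back(sum += w);
    }
    // Invert the discrete CDF: map a uniform u in [0,1) to a bin index
    // and report that bin's probability mass.
    int sample(double u, double* pmf) const {
        const double target = u * cdf.back();
        const int i = int(std::lower_bound(cdf.begin(), cdf.end(), target)
                          - cdf.begin());
        const double w = cdf[i] - (i > 0 ? cdf[i - 1] : 0.0);
        *pmf = w / cdf.back();
        return i;
    }
};
```

At a refractive hit, you'd draw a bin with sample(), map the bin index back to its wavelength, feed that to sellmeierIOR(), and divide the path throughput by the returned probability mass so the estimator stays unbiased.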
You could also simplify things a little at the cost of speed by always associating a single wavelength with each ray, instead of a spectrum.
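To make that variant concrete, here's roughly what such a ray might carry; the names are hypothetical, not Photorealizer's actual ray type:

```cpp
struct Vec3 { double x, y, z; };

// A ray that carries one wavelength and a scalar weight instead of a
// full spectrum. The wavelength is fixed when the camera path is
// spawned and stays constant through every bounce.
struct SpectralRay {
    Vec3   origin;
    Vec3   direction;
    double wavelength;  // in nanometers
    double throughput;  // scalar Monte Carlo weight, not an RGB triple
};
```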
At the end, the spectral radiance data needs to be converted into RGB data that can be displayed on the screen. To accomplish this, integrate the spectrum against the CIE XYZ color matching functions to convert the data to CIE XYZ tristimulus values. (I could also convert to a more biologically based color space like LMS, using real cone response curves, but then I'd probably end up converting to CIE XYZ anyway.) From there, convert to sRGB primaries, apply gamma correction, and finally apply dithering and map each of the RGB components to 8-bit integers.
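Here's a sketch of that conversion, assuming the CIE 1931 color matching function table is passed in tabulated at the same wavelengths as the spectral samples. The XYZ-to-linear-sRGB matrix and the transfer function are the standard ones; exposure normalization and dithering are omitted for brevity:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// One sample of the CIE 1931 color matching functions.
struct CMFSample { double xBar, yBar, zBar; };

// sRGB transfer function (gamma correction).
double srgbEncode(double c) {
    return c <= 0.0031308 ? 12.92 * c
                          : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
}

// Integrate the spectrum against the color matching functions (a
// simple Riemann sum), convert XYZ to linear sRGB with the standard
// D65 matrix, gamma-encode, and quantize to 8 bits.
void spectrumToSRGB(const std::vector<double>& radiance,
                    const std::vector<CMFSample>& cmf,
                    double wavelengthStepNm,
                    uint8_t rgb[3]) {
    double X = 0.0, Y = 0.0, Z = 0.0;
    for (size_t i = 0; i < radiance.size(); ++i) {
        X += radiance[i] * cmf[i].xBar * wavelengthStepNm;
        Y += radiance[i] * cmf[i].yBar * wavelengthStepNm;
        Z += radiance[i] * cmf[i].zBar * wavelengthStepNm;
    }
    const double lin[3] = {
         3.2406 * X - 1.5372 * Y - 0.4986 * Z,
        -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
         0.0557 * X - 0.2040 * Y + 1.0570 * Z
    };
    for (int c = 0; c < 3; ++c) {
        const double v = srgbEncode(std::clamp(lin[c], 0.0, 1.0));
        rgb[c] = uint8_t(std::lround(v * 255.0));
    }
}
```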
An alternate way of converting the spectral data to RGB would be to draw the wavelengths of your camera rays from the R, G, and B sensitivity curves of your camera or your eye in the first place. This would be an intuitive approach if you only want an RGB or LMS image in the end. But if you want a physically accurate breakdown of the final image over the entire visible spectrum, it would be easier to select samples uniformly over the visible spectrum (or even beyond the visible spectrum, say if you want to include fluorescence and other cool wavelength-dependent effects of light).
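A minimal sketch of the uniform option, with illustrative range bounds:

```cpp
#include <random>

// Pick a wavelength uniformly over the visible range and report its
// density, so the sample can be weighted by 1/pdf as usual.
const double kLambdaMin = 380.0;  // nm
const double kLambdaMax = 780.0;  // nm

double sampleWavelengthUniform(std::mt19937& rng, double* pdf) {
    std::uniform_real_distribution<double> u(kLambdaMin, kLambdaMax);
    *pdf = 1.0 / (kLambdaMax - kLambdaMin);  // constant over the range
    return u(rng);
}
```

Sampling from the sensitivity curves instead would just mean building a discrete CDF from the curve, like the sampler sketched earlier, and inverting it, with the pdf changing accordingly.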
This technique touches on many fields: physics, probability, colorimetry, and more. The breadth of disciplines involved is one of the things I really like about rendering. Not only is exposure to different fields interesting, it's also key to creative thinking and innovation.