Monday, April 30, 2012

Dispersion

Image from Wikimedia Commons.
One thing that's currently missing from Photorealizer (and from many commercial and production renderers, too) is dispersion. Dispersion is what causes white light to split into its component spectrum through a prism, what causes rainbows, and what causes chromatic aberration when looking through a lens (which often appears as fringes of color at the edges of objects). It's also one of the many things that could make my purple glass bunny render more realistic, which is what got me thinking about it.

Dispersion happens because the speed at which light travels in a medium depends on the frequency of the light. In other words, the refractive index (the ratio of the speed of light in a vacuum to the speed of light in the material) depends on the frequency of the incident light. (I'm saying frequency instead of wavelength here because the wavelength changes depending on the medium, while the frequency stays the same.) (By the way, amazingly, the refractive index is the square root of the dielectric constant.) In particular, lower frequencies (longer wavelengths) result in lower refractive indices. For example, the refractive index of glass is lower for red light than for blue light, so a higher proportion of the red light will be transmitted, and it won't be bent as sharply when it is refracted.

Adding dispersion to Photorealizer isn't my top priority, but I have given it some thought. Here are some of my ideas for implementing dispersion in Photorealizer (or any other distribution ray tracer):

Propagate light through the scene using a spectrum of a number of wavelengths, rather than just RGB triplets. For speed, trace all of the wavelengths together, as you might for RGB color. Then when you hit a refractive surface, importance sample a wavelength from the distribution of wavelengths using inverse transform sampling, i.e., sampling from the discrete CDF of the distribution. I have 1D and 2D discrete probability distributions in Photorealizer, which have come in handy for importance sampling HDR environment maps, lights, and BSDFs. Given the sampled wavelength and the material properties, use the Sellmeier equation to compute the index of refraction of the material at that wavelength, use that index to compute the refracted direction, and shoot a ray in that direction. Use standard Monte Carlo integration and Russian roulette to give this new sample the proper weight.
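To make that concrete, here's a rough sketch in C++ of the two pieces involved: picking a wavelength bin from a discrete distribution by inverting its CDF, and evaluating the Sellmeier equation at that wavelength. The function names and the binned spectrum representation are just placeholders, not actual Photorealizer code, and the Sellmeier coefficients are the commonly published ones for BK7 glass.

```cpp
#include <cmath>
#include <vector>

// Inverse transform sampling of a discrete spectral distribution.
// `pdf` holds the (unnormalized) weight of each wavelength bin and `u` is a
// uniform random number in [0, 1). Returns the chosen bin index and writes
// the probability of choosing it to `outProb`.
int sampleWavelengthBin(const std::vector<double>& pdf, double u, double& outProb)
{
    double total = 0.0;
    for (double w : pdf) total += w;

    double target = u * total;
    double cumulative = 0.0;
    for (size_t i = 0; i < pdf.size(); ++i) {
        cumulative += pdf[i];
        if (target <= cumulative) {
            outProb = pdf[i] / total;
            return static_cast<int>(i);
        }
    }
    outProb = pdf.back() / total;
    return static_cast<int>(pdf.size()) - 1;
}

// Sellmeier equation: refractive index as a function of wavelength
// (in micrometers). The coefficients below are the commonly published
// values for BK7 glass; other materials need their own coefficients.
double sellmeierIOR(double lambdaMicrometers)
{
    const double B[3] = { 1.03961212, 0.231792344, 1.01046945 };
    const double C[3] = { 0.00600069867, 0.0200179144, 103.560653 }; // micrometers^2

    double lambda2 = lambdaMicrometers * lambdaMicrometers;
    double nSquared = 1.0;
    for (int i = 0; i < 3; ++i) {
        nSquared += B[i] * lambda2 / (lambda2 - C[i]);
    }
    return std::sqrt(nSquared);
}
```

Once you have the index of refraction for the sampled wavelength, the refraction itself works just like the monochromatic case (Snell's law plus Fresnel), and dividing the sample's contribution by the probability of picking that wavelength keeps the Monte Carlo estimate unbiased.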

You could also simplify things a little at the cost of speed by always associating a single wavelength with each ray, instead of a spectrum.

At the end, the spectral radiance data needs to be converted into RGB data that can be displayed on the screen. To accomplish this, integrate the spectrum against the CIE XYZ color matching functions to convert the data to CIE XYZ tristimulus values. (I could also convert to a more biologically based color space like LMS, using real cone response curves, but then I'd probably end up converting to CIE XYZ anyway.) From there, convert to sRGB primaries, apply gamma correction, and finally apply dithering and map each of the RGB components to an 8-bit integer.
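Here's a rough sketch of that conversion, assuming the CIE color matching function tables are available and sampled at the same wavelengths as the spectrum. The names are hypothetical, and I've left the dithering step out to keep it short.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct RGB8 { uint8_t r, g, b; };

// Integrate a spectrum (one radiance value per wavelength bin) against
// tabulated CIE 1931 color matching functions, convert the resulting XYZ
// tristimulus values to linear sRGB, apply the sRGB transfer curve, and
// quantize each component to 8 bits.
RGB8 spectrumToSRGB(const std::vector<double>& spectrum,
                    const std::vector<double>& cieX,
                    const std::vector<double>& cieY,
                    const std::vector<double>& cieZ,
                    double binWidth)
{
    double X = 0.0, Y = 0.0, Z = 0.0;
    for (size_t i = 0; i < spectrum.size(); ++i) {
        X += spectrum[i] * cieX[i] * binWidth;
        Y += spectrum[i] * cieY[i] * binWidth;
        Z += spectrum[i] * cieZ[i] * binWidth;
    }

    // Standard XYZ-to-linear-sRGB matrix (D65 white point).
    double r =  3.2406 * X - 1.5372 * Y - 0.4986 * Z;
    double g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z;
    double b =  0.0557 * X - 0.2040 * Y + 1.0570 * Z;

    auto encode = [](double c) -> uint8_t {
        c = std::max(0.0, std::min(1.0, c));
        // sRGB transfer function (the "gamma correction" step).
        c = (c <= 0.0031308) ? 12.92 * c : 1.055 * std::pow(c, 1.0 / 2.4) - 0.055;
        return static_cast<uint8_t>(std::lround(c * 255.0));
    };

    return { encode(r), encode(g), encode(b) };
}
```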

An alternate way of converting the spectral data to RGB would be to draw the wavelengths of your camera rays from the R, G, and B sensitivity curves of your camera or your eye in the first place. This would be an intuitive approach if you only want an RGB or LMS image in the end. But if you want a physically accurate breakdown of the final image over the entire visible spectrum, it would be easier to select samples uniformly over the visible spectrum (or even beyond the visible spectrum, say if you want to include fluorescence and other cool wavelength-dependent effects of light).
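One simple way to set that up (again just a sketch, with hypothetical table names) would be to sum the sensitivity curves into a single sampling distribution and feed it to the same inverse-CDF sampler shown earlier.

```cpp
#include <vector>

// Build a wavelength sampling distribution from tabulated R, G, and B
// sensitivity curves (sampled at the same wavelengths). The summed curve
// can then be fed to the inverse-CDF sampler from the earlier sketch, so
// camera rays concentrate on wavelengths the sensor actually responds to.
std::vector<double> buildSensorSamplingPdf(const std::vector<double>& sensR,
                                           const std::vector<double>& sensG,
                                           const std::vector<double>& sensB)
{
    std::vector<double> pdf(sensR.size());
    for (size_t i = 0; i < pdf.size(); ++i) {
        pdf[i] = sensR[i] + sensG[i] + sensB[i];
    }
    return pdf; // unnormalized; the sampler normalizes it when building the CDF
}
```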

This technique touches on many fields: physics, probability, colorimetry, and more. The breadth of disciplines involved is one of the things I really like about rendering. Not only is exposure to different fields interesting, but it's also key for creative thinking and innovation.
