An image from Eric Veach's thesis.
I had never thought about this until I read about it recently in Eric Veach's thesis. In section 5.2, Veach explains why BSDFs are non-symmetric when refraction is involved, and how to correct for this (see the thesis for a full explanation, and for a lot more great stuff). After reading Veach's explanation, I implemented a correction factor in my renderer. You can see some before and after images below.
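To make the correction concrete, here's a rough sketch of the standard η² radiance scaling that Veach derives (illustrative names, not my renderer's actual code). The key fact is that radiance divided by the squared index of refraction is invariant across a smooth refractive boundary, so a path tracer that traces from the camera has to rescale what it gathers whenever a ray crosses between media:

```cpp
// Rough sketch of the transmitted radiance correction at a specular
// refraction event, for a path tracer that traces from the camera.
// The names (etaFrom, etaTo, throughput) are illustrative.
//
// L / eta^2 is invariant across a smooth refractive boundary. The light
// gathered along a camera ray travels in the opposite direction, from the
// medium with IOR etaTo into the medium with IOR etaFrom, so its radiance
// scales by (etaFrom / etaTo)^2.
float transmittedRadianceScale(float etaFrom, float etaTo) {
    return (etaFrom * etaFrom) / (etaTo * etaTo);
}

// At the refraction event:
//   throughput *= (1.0f - fresnelReflectance)
//               * transmittedRadianceScale(etaFrom, etaTo);
//
// For a camera underwater (etaFrom = 1.33) looking up into air
// (etaTo = 1.0), the factor is about 1.77 -- the brightening visible
// in the "after" image below.
```

A photon tracer carries flux rather than radiance, so it omits this factor entirely, which is the subject of the next paragraph.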
For photon mapping, the correction factor is not needed: photons carry flux (power) rather than radiance, and computing irradiance from a photon map amounts to estimating the photon density, which is naturally higher wherever the same number of photons has been squeezed into a smaller area.
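You can see this in the textbook k-nearest-photon irradiance estimate (after Jensen), sketched here with made-up types rather than my renderer's actual interface:

```cpp
#include <vector>

const float kPi = 3.14159265f;

// Hypothetical minimal photon record; real implementations store more.
struct Photon {
    float flux;        // photon power (one channel, for brevity)
    float cosIncident; // dot(photon direction, surface normal)
};

// E ~= (sum of photon flux) / (pi * r^2), where r is the radius of the
// disc enclosing the k nearest photons. When refraction squeezes the same
// photons into a smaller disc, r^2 shrinks and the estimate rises by
// itself -- that rise is exactly the eta^2 radiance increase, so no
// explicit correction factor appears anywhere in the estimator.
float estimateIrradiance(const std::vector<Photon>& nearest, float radius) {
    float sum = 0.0f;
    for (const Photon& p : nearest)
        if (p.cosIncident < 0.0f) // photon arriving at the front side
            sum += p.flux;
    return sum / (kPi * radius * radius);
}
```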
In cases where light enters and then later exits a medium (or vice versa), the multipliers cancel each other out, which hides the problem. But in other cases, the problem can be very apparent. In the underwater images below, you can see that the path tracing and photon mapping versions do not match until the transmitted radiance fix is made. The path tracing image without the fix appears much too dark. If I had instead put the light under the water and taken a picture on the air side (the opposite situation), the scene would have appeared too bright rather than too dark.
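To put numbers on the cancellation, take air at η ≈ 1.0 and water at η ≈ 1.33. Entering the water scales radiance by (1.33/1.0)² ≈ 1.77, and exiting scales it by (1.0/1.33)² ≈ 0.57, so a path that crosses the surface once in each direction is unchanged:

```latex
\left(\frac{\eta_{\text{water}}}{\eta_{\text{air}}}\right)^{2}
\left(\frac{\eta_{\text{air}}}{\eta_{\text{water}}}\right)^{2} = 1
```

In the underwater shots, though, light from the lamp typically crosses the surface only once on its way to the camera, so the 1.77 factor is never cancelled, and dropping it leaves the whole image too dark.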
Path tracing before transmitted radiance fix.
Photon mapping before transmitted radiance fix.
Path tracing after transmitted radiance fix.
Photon mapping after transmitted radiance fix.
There are some artifacts in these renders due to the lack of shading normals (Phong shading: smoothing a faceted surface by interpolating vertex normals across faces). My photon mapping implementation doesn't currently support shading normals, so I disabled them for these renders to make the comparison as fair as possible. Shading normals can add bias to renders, but used appropriately and carefully they can make images look much nicer.
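For reference, here's what that interpolation looks like, sketched with made-up minimal types:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// The shading normal at a triangle hit blends the three vertex normals
// with the hit point's barycentric coordinates (b0 + b1 + b2 = 1) and
// renormalizes. The flat face's geometric normal is ignored, which is
// where the bias can sneak in.
Vec3 shadingNormal(Vec3 n0, Vec3 n1, Vec3 n2, float b0, float b1, float b2) {
    Vec3 n = { b0 * n0.x + b1 * n1.x + b2 * n2.x,
               b0 * n0.y + b1 * n1.y + b2 * n2.y,
               b0 * n0.z + b1 * n1.z + b2 * n2.z };
    return normalize(n);
}
```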
There are also some artifacts due to problems with the scene, which I've since fixed. (In my next post I'll include a shot of the scene from above the water, cleaned up and with shading normals.) I found the ladder on SketchUp and modified it in Maya, though I probably should have made one from scratch for maximum control and cleanliness. I modeled the rest of the scene myself, mostly in Houdini, which is nice for certain kinds of procedural modeling and makes it really easy to go back and modify anything.
Usually (depending on the settings), I shoot multiple rays at the first diffuse (or glossy) bounce. If a camera ray encounters a refractive surface before that first diffuse bounce, I split the ray and divide the rays allocated to the first diffuse bounce proportionally between the reflected and refracted paths. (Deeper interactions with refractive surfaces use Russian roulette to select between reflection and refraction, which avoids excessive branching.) While making these renders, I caught and fixed an importance sampling bug: after total internal reflection, the number of rays allocated to both paths was just one, instead of a representative fraction of the total. Since there is a lot of total internal reflection visible on the bottom of the water surface in this shot, the bug was causing the reflection of the pool interior to converge much more slowly than the direct view of the pool interior.
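Here's a sketch of the splitting policy with the fix; the names and exact rounding are illustrative, not my actual code:

```cpp
#include <algorithm>
#include <cmath>

// Split a ray budget between the reflected and refracted branches at a
// refractive surface hit before the first diffuse bounce. Assumes
// budget >= 2 at a splitting event.
void splitRayBudget(int budget, float fresnelReflectance,
                    bool totalInternalReflection,
                    int* reflected, int* refracted) {
    if (totalInternalReflection) {
        // The bug was here: this case handed the continuing (reflected)
        // path a single ray instead of the full remaining budget.
        *reflected = budget;
        *refracted = 0;
        return;
    }
    // Split in proportion to the Fresnel reflectance, keeping at least
    // one ray on each side so neither branch is starved.
    int r = static_cast<int>(std::lround(budget * fresnelReflectance));
    *reflected = std::min(std::max(r, 1), budget - 1);
    *refracted = budget - *reflected;
}
```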