I remembered something that Sim Dietrich said at one of the tutorials at GDC a few weeks ago, and it sort of stuck with me: cube mapping can be thought of as six simultaneous projective textures, each with a 90-degree fov, each pointing down its own axis. Well, the standard trick for emulating a complex single-pass operation in OpenGL is to do it with several simpler passes. Such is the case with cube mapping.
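One way to see why the six-frusta view works: every direction vector falls into exactly one face's 90-degree frustum, namely the face whose axis has the largest absolute component. Here's a minimal sketch of that face selection; the helper name and face numbering are mine, not from any spec or the demo source.

```c
#include <assert.h>
#include <math.h>

/* Pick the cube face a direction vector lands on: the face whose axis has
 * the largest absolute component.  Faces: 0=+X 1=-X 2=+Y 3=-Y 4=+Z 5=-Z.
 * (Hypothetical helper for illustration only.) */
int cube_face(float x, float y, float z)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    if (ax >= ay && ax >= az) return x >= 0.0f ? 0 : 1;
    if (ay >= az)             return y >= 0.0f ? 2 : 3;
    return                           z >= 0.0f ? 4 : 5;
}
```

Since the six frusta partition all directions, projecting each face in its own pass covers the whole environment with no gaps.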
So I grabbed the cube map images that are available on the nvidia web
site, and set about trying to project a cubic environment onto a scene.
Here's what resulted.
The multi-colored cube is actually all six view frusta, each colored differently. You can see the environment projected onto all the geometry in the scene -- a teapot and four sides of a cube. As you might expect, this is not super-fast. It takes six passes, after all. For the rendering specifics, I used a WRAP_S and WRAP_T of CLAMP and set the edges of each cube face texture to alpha=0. Then I used the alpha test to cull beyond the edge. I used per-vertex lighting with a SPOT_CUTOFF of 90 to eliminate the reverse projection. There are other ways to achieve these results as well. For more specifics on the rendering, check out the source.
You need something like GL_NV_texgen_reflection to easily do environment mapping this way, though a different technique must be employed to cull the backfacing reflection vectors. This is, of course, not the best way to do environment mapping. If you have two texture units, you're better off computing a dual-paraboloid map. If you have cube map support, you should probably use that.
At any rate, I approached this as an exercise, but the really interesting thing is that it has a practical application! One of the problems with most forms of environment mapping is that when the scene changes, the environment map needs to be recomputed. There are ways of limiting how much needs to be recomputed, but it gets kind of complicated. What if you have just one object in your scene that's dynamic, and the rest is static? Surely there's some way to optimize for this case.
There is! You can use a SINGLE 90-degree fov camera (like using
only 1 face of a cubic environment map), and have it follow the dynamic
object using a lookat call. Update this texture every frame, and leave the static environment
map alone. Your static environment map can be a cube map, sphere
map, or dual paraboloid map. It can be at a different resolution
than the dynamic map. You can have multiple dynamic objects as well,
but this introduces some extra complexity -- particularly if any of the reflected objects occlude one another. Here is a simple example
showing this technique applied.
This is pretty cool. What's really neat about it is that it doesn't require projective texturing or ANY special features except NV_texgen_reflection, which would be a simple extension for almost any vendor of consumer hardware to add. I actually use ARB_multitexture in this example to remove the image that would be produced by the backfacing (w.r.t. the reflected object) reflection vectors, but per-vertex lighting (with all but the specular component set to zero) is probably a cheaper way to get this clamping. If you look at the demo, you can use the 'b' key to toggle this clamping and see the incorrect reflection as well. For the rendering specifics of this technique, please see the source.

Both examples should build without difficulty on win32 systems that support ARB_multitexture and NV_texgen_reflection. If your vendor does not support NV_texgen_reflection, take the time to write them and ask for it. It should not require any special hardware, and this technique is a pain to implement if the driver doesn't compute the reflection vectors for you.
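For a sense of what the driver is doing for you: the quantity NV_texgen_reflection generates per vertex is the eye-space reflection vector R = I - 2(N.I)N, with I the unit vector from the eye to the vertex and N the unit normal. A minimal sketch of that math (illustration only -- the whole point of the extension is that you don't write this yourself):

```c
#include <assert.h>
#include <math.h>

/* Reflection vector R = I - 2 (N . I) N, the per-vertex quantity that
 * NV_texgen_reflection feeds into the texture coordinates.  `inc` is the
 * unit incident vector (eye to vertex), `n` the unit normal.
 * (Hypothetical helper for illustration.) */
void reflect3(const float inc[3], const float n[3], float R[3])
{
    float d = 2.0f * (n[0]*inc[0] + n[1]*inc[1] + n[2]*inc[2]);
    R[0] = inc[0] - d * n[0];
    R[1] = inc[1] - d * n[1];
    R[2] = inc[2] - d * n[2];
}
```

Doing this on the CPU for every vertex every frame is exactly the pain mentioned above, which is why it's worth asking your vendor for the extension.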