Short, figure-it-out-yourself explanation: Fog is rendered by drawing the objects inside the fog, and the back faces of the fog, with an alpha blend. The alpha value is computed from fog math based on a distance that approximates the raycast from the point being fogged to the point on the front of the fog in the direction of the camera. To compute this "raycast", the stencil buffer is used to consider each of the possible front faces separately, one at a time. For each face, all the rays that hit that face are processed, since these are simply all the pixels that the face draws. For each object in the fog, there is one extra pass for each visible front face of fog; e.g., for cube fog, 1-3 extra passes. The fog volume itself is drawn that many times plus 3 additional times; i.e., a cube fog right in your face (only one visible front face) will paint the entire screen 4 times. This is reasonable for a cube but would be horrible for a highly tessellated sphere or a convexified teapot. Texgen and a carefully constructed texture are used to compute the distance from a given point to a single plane; each visible front face is stenciled and processed independently to make this possible. The texture can also precompute exponential fog, although there may be precision issues.

Long explanation: Because each fog volume is convex, it is easy to use the stencil buffer to determine which pixels are inside the fog. Unlike shadowing, however, that's only half the battle. The fog volume is drawn twice to locate where fogging needs to occur: once drawing back faces, once drawing front faces.
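As a concrete illustration of the fog math mentioned above, here is a minimal sketch of the exponential fog curve that the texture can precompute; the `density` constant is a hypothetical tuning parameter, not a value from this writeup:

```python
import math

def fog_alpha(dist, density=0.5):
    """Exponential fog: alpha approaches 1 as the in-fog distance grows.
    'density' is a hypothetical tuning constant."""
    return 1.0 - math.exp(-density * max(dist, 0.0))
```

The back face of the fog (or the interior object) is then alpha-blended with the fog color using this alpha, so longer in-fog rays read as thicker fog.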
To make things simple, I set the following stencil buffer bits:

  1  backface zbuffer test succeeded (back face of fog is visible)
  2  backface zbuffer test failed (back face of fog is invisible)
  4  frontface zbuffer test succeeded

This leads to the following combinations at any given pixel:

  4+1  entire fog volume is visible
  4+2  interior object is visible
  4    cannot occur with a valid fog volume
  1    back of fog is visible, front of fog behind near clip plane
  2    interior object is visible, front of fog behind near clip plane;
       or object is visible, front of fog behind object
  0    outside fog volume

These values require the following actions:

  4+1  draw back side of fog, fogged to front polygon
  4+2  draw interior object, fogged to front polygon
  1    draw back side of fog, fogged to near clip
  2    draw interior object, fogged to near clip, or do nothing

Fogging to the near clip plane (and not the camera location) is necessary to maintain continuity, and conveniently avoids a problematic divide by 0. To distinguish between the two '2' cases, another pass is made that tests for a z-fail on the front side of the fog surfaces, which zeroes the stencil buffer.
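The bit assignments and actions above can be captured in a tiny lookup; the constant names and action strings here are hypothetical labels for illustration, not from any real API:

```python
# Hypothetical names mirroring the bit assignments in the text.
BACK_ZPASS  = 1   # back face of fog visible
BACK_ZFAIL  = 2   # back face of fog occluded
FRONT_ZPASS = 4   # front face of fog visible

def action(stencil):
    """Map a stencil value to the action described in the text."""
    return {
        FRONT_ZPASS | BACK_ZPASS: "fog back face, fogged to front polygon",
        FRONT_ZPASS | BACK_ZFAIL: "interior object, fogged to front polygon",
        BACK_ZPASS:               "fog back face, fogged to near clip",
        BACK_ZFAIL:               "interior object, fogged to near clip, or nothing",
        0:                        "outside fog volume",
    }.get(stencil, "invalid for a convex fog volume")
```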
The actual code flow goes like this:

  // assume stencil buffer is all 0
  draw backfaces of fog, setting stencil to '1' or '2' depending on ztest
  for every front face of fog
      draw this face, ORing '4' into the stencil buffer
      enable fog rendering modes for this face (texgen, texture, blend mode)
      set stencil test to 4+2, write 0 on zsuccess
      draw all objects that overlap the fog
      set stencil test to 4+1, write 0 on success
      draw all backfaces of the fog
  // now handle the "virtual" front face, the near clip plane
  enable fog rendering modes for near plane
  set stencil test to 2, write 0 on success
  draw all objects that overlap the fog
  set stencil test to 1, write 0 in all cases (just in case)
  draw all backfaces of the fog

In my actual testbed, I iterate over all faces of the fog, not just the front faces, hampering performance but not affecting the output at all (since such faces don't render and thus don't touch the stencil buffer). A subtlety often missed in multi-pass rendering is that the use of "z <=" testing means multiple independent fragments may write to the same pixel if a surface is self-intersecting (or self-occluding without sufficient z precision). Normally this isn't the end of the world, but in multipass, writing to a pixel twice opaquely and then writing to it twice transparently will produce the wrong result (e.g., double fogging, here). The algorithm above avoids this for free by clearing the stencil buffer once a pixel is fogged. The only trick left is how to compute the distance to a single plane. It is important that the math involved provide consistent results at the boundary between two faces, since fog should appear continuous (C0 but not C1 at those boundaries). This rules out a number of simple/naive solutions and basically demands implementing the true math.
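The pass sequence can be sanity-checked per pixel with a small simulation; this is a sketch of the stencil logic only (no rendering or GL calls), with a hypothetical `FAR` depth:

```python
FAR = 1000.0   # hypothetical far-plane depth

def fog_pixel(front, back, obj=None):
    """Walk one pixel through the pass sequence (stencil logic only).

    front, back: depths of the fog volume's front and back faces here;
    front is None if the front face is behind the near clip plane.
    obj: depth of an object overlapping the fog at this pixel, or None.
    Assumes the pixel is covered by the fog volume.
    Returns (what_gets_fogged, fogged_to).
    """
    zbuf = obj if obj is not None else FAR   # scene already z-rendered
    # Draw back faces of fog: 1 on z-pass, 2 on z-fail.
    stencil = 1 if back <= zbuf else 2
    # Draw this front face, ORing in 4 where it passes the z-test;
    # a face behind the near plane is clipped and touches nothing.
    if front is not None and front <= zbuf:
        stencil |= 4
    # Extra pass from the text: a z-fail on the front face zeroes the
    # stencil, so "object in front of the whole fog volume" does nothing.
    if front is not None and front > zbuf:
        stencil = 0
    # Fogging passes keyed off the stencil value (each clears it once used).
    if stencil == 4 + 2:
        return ("interior object", "front face")
    if stencil == 4 + 1:
        return ("fog back face", "front face")
    if stencil == 2:
        return ("interior object", "near clip")
    if stencil == 1:
        return ("fog back face", "near clip")
    return ("nothing", None)
```

For example, an object at depth 7 between fog faces at 5 and 10 is fogged against the front face, while an object at depth 3 in front of the whole volume is left alone.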
The big problem is that this function is not monotonic as you move linearly through space; the distance from the camera to the points along a line is shortest at the perpendicular foot, not at either end. Traditional fog avoids this by using "depth fog", which results in some artifacts (notably orientation-dependent fogging), especially with very wide fields of view. My solution is a similar approximation--in fact, if the camera is inside the fog, it is identical. (For linear fog, it is possible to introduce a second texture which compensates for this effect; however, for exponential fog, this would require an exponentiating texture blend function.) The basic idea is to decompose the distance-to-plane math into two factors which are multiplied together. Rather than using two textures, which would introduce range problems (one value may be very small while the other is large), the two factors here are each functions of a single variable that can be texgen'd into one texture; instead of computing two texture lookups F(s) * G(t) and multiplying, I build a single texture H(s,t) which computes F(s)*G(t). It turns out that the distance from a point P to the plane, measured along the line to the eye at the origin (without loss of generality), can be computed by the formula

  dist = ||P|| * (1 + d / (P.N))

or the equivalent

  dist = ||P|| * ((P.N + d) / (P.N))

That is, the distance equals the distance from the point to the eye times the ratio of the point-to-plane distance to the sum of the point-to-plane and eye-to-plane distances. The distance from the point P to the eye, ||P||, is approximated as the z depth, and texgen'd into S. The function (P.N)/d is particular to this plane, and texgen'd into T. The texture lookup on (s,t) returns a fog amount for the distance as computed above. (S is rescaled to the maximum possible view distance; T is rescaled to allow the ratio to range from 0..16 or so.)
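The closed form can be checked against a direct ray-plane intersection; a small sketch assuming a plane N.X + d = 0 with unit normal and the eye at the origin:

```python
import math

def ray_plane_dist(P, N, d):
    """dist = ||P|| * (1 + d / (P.N)): distance from P to the plane
    N.X + d = 0, measured along the line from P to the eye at the origin."""
    dot = sum(p * n for p, n in zip(P, N))
    return math.sqrt(sum(p * p for p in P)) * (1.0 + d / dot)

def ray_plane_dist_direct(P, N, d):
    """The same quantity by explicit ray-plane intersection."""
    dot = sum(p * n for p, n in zip(P, N))
    t = -d / dot                           # X = t*P lies on the plane
    return math.sqrt(sum((p - t * p) ** 2 for p in P))
```

For an eye at the origin looking down +z at the plane z = 4 (N = (0,0,-1), d = 4), a point at z = 10 is 6 units from the plane along the view ray, and both forms agree.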
Because the substitution of Z for ||P|| is independent of the chosen plane, this approximation doesn't prevent boundaries from matching up. Moreover, even though the texture can only approximate the actual function (since it's discretely sampled and interpolated), there's no problem with the boundaries being computed slightly differently for different faces: regardless of the face, the true distance is identical at the boundaries, and the eye distance is identical at the boundaries, so it follows that (1 + d/(P.N)) must be identical at the boundaries (even though d and N are different for the different faces), from which it follows that T = (P.N)/d must be identical at the boundaries. Thus the same texture coordinates are generated at the boundaries, to the limits of the precision of the texgen computation. (This was verified in the test application by viewing S and T independently instead of the fog computation.) This means a small texture can be used, since the approximations caused by shrinking the texture cannot show up as visual discontinuities. I didn't intentionally design this property into the way I factored the distance math--I was just looking for some function that could be texgen'd--but it's worth remembering for similar future situations. (Of course, most of the time we work with continuous vertex data, not faceted vertex data, so it's never an issue.)
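The single-texture scheme can be sketched as follows; `MAX_DIST`, `SIZE`, and `DENSITY` are hypothetical choices (the text only suggests a T range of 0..16 or so), and the bilinear lookup stands in for the hardware's texture filtering:

```python
import math

MAX_DIST = 100.0  # assumed maximum view distance (rescales S)
MAX_T    = 16.0   # assumed T range, per the text's "0..16 or so"
SIZE     = 64     # a deliberately small texture
DENSITY  = 0.03   # hypothetical exponential fog density

def fog_amount(dist):
    return 1.0 - math.exp(-DENSITY * max(dist, 0.0))

# H precomputes fog_amount(||P|| * (1 + 1/T)), where S encodes the eye
# distance ||P|| (approximated by z) and T encodes (P.N)/d.
H = [[fog_amount((si / (SIZE - 1) * MAX_DIST)
                 * (1.0 + 1.0 / max(ti / (SIZE - 1) * MAX_T, 1e-6)))
      for ti in range(SIZE)] for si in range(SIZE)]

def lookup(s, t):
    """Bilinear sample of H at s in [0, MAX_DIST], t in [0, MAX_T]."""
    u = min(s / MAX_DIST, 1.0) * (SIZE - 1)
    v = min(t / MAX_T, 1.0) * (SIZE - 1)
    i, j = int(u), int(v)
    i1, j1 = min(i + 1, SIZE - 1), min(j + 1, SIZE - 1)
    fu, fv = u - i, v - j
    a = H[i][j] * (1 - fv) + H[i][j1] * fv
    b = H[i1][j] * (1 - fv) + H[i1][j1] * fv
    return a * (1 - fu) + b * fu
```

Since boundary rays generate identical (s,t) on both faces, both faces sample the same texel neighborhood and the interpolated fog matches exactly, however coarse SIZE is.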